Developer Platform (August 2020)

Importing course content packages using APIs



Brightspace provides APIs that you can use to automate or script the process of importing course content packages into existing course shells. This walk-through example shows a rudimentary script that demonstrates how to use the API to manage the two-step course import process.

Starting point. The example assumes several things you can already do, or have already done: you have registered an application and obtained ID-Key credentials, you know the D2L identifier for the course offering that will receive the import, and you have a course content package on hand.

The rest of this topic walks through the body of a Python script, explaining what it does at each stage.


This script is not meant for real production use; it does nothing in the way of robust error handling. Its sole purpose is to demonstrate, with a simple example, the kind of workflow the Brightspace course import APIs can support.

Handling the command line

The script uses a number of Python libraries (including Kenneth Reitz’s requests package):

argparse. Handles command line parsing.

requests. Handles the operations of making the Brightspace API web-service calls.

time. Used so that our script can sleep during the import operation, to avoid over-polling Brightspace.

d2lvalence. Used to handle construction of the user context object that makes it easy to do API authentication.

#!/usr/bin/env python3

import argparse
import requests
import time

from d2lvalence import auth as d2lauth

Here we capture the version of the LE product component available on our back-end service (this example assumes Brightspace 10.6.9, offering the LE product component’s version 1.21 API contract). We also have cached values we can use to rebuild the application and user context for use with ID-Key authentication (we put in placeholders for these values).


User ID/Key and Application ID/Key values are security-sensitive; if you write them into a script like this in production, you should realize that this is the same as storing passwords in a script. Ideally, a script you use in production would have a much more robust method for doing authentication.

# LE API version
_LE = '1.21'

# cached user credentials
_cached_user_context = {'anonymous': False,
                        'encrypt_requests': True,
                        'host': '<the_Brightspace_domain_name>',
                        'server_skew': 0,
                        'user_id': '<the_IDKey_auth_userID_value>',
                        'user_key': '<the_IDKey_auth_userKey_value>'}

# cached application credentials
_cached_app_context = {'app_id': '<the_IDKey_auth_applicationID_value>',
                       'app_key': '<the_IDKey_auth_applicationKey_value>'}

Next we do command line argument setup and parsing. This part of the script makes it obvious that the script requires two command line options (and supports a third, optional one):

--course-package. File path location of your course content package to import.

--course-offering-id. D2LID for the course offering you’re importing into.

--job-token. If you’re already in the middle of a course import job, you can use the script with the --job-token argument to not initiate another import job, but merely check on the status of an existing job.

# handle command line args
parser = argparse.ArgumentParser()
parser.add_argument('--course-package', type=str, required=True,
             help='Absolute path location of the course package to import')
parser.add_argument('--course-offering-id', type=str, required=True,
             help='D2L system identifier for the course offering to receive the import')
parser.add_argument('--job-token', type=str,
             help='Job token if you just want to check on status of a known import job')
args = parser.parse_args()
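You can exercise the parser without touching the shell, because parse_args also accepts an explicit argument list; the package path and course offering ID below are made up purely for illustration:

```python
import argparse

# same parser as in the script
parser = argparse.ArgumentParser()
parser.add_argument('--course-package', type=str, required=True,
                    help='Absolute path location of the course package to import')
parser.add_argument('--course-offering-id', type=str, required=True,
                    help='D2L system identifier for the course offering to receive the import')
parser.add_argument('--job-token', type=str,
                    help='Job token if you just want to check on status of a known import job')

# feed argparse a sample command line directly (handy for testing)
args = parser.parse_args(['--course-package', '/tmp/sample_course.zip',
                          '--course-offering-id', '6606'])
print(args.course_package)    # /tmp/sample_course.zip
print(args.job_token)         # None: the optional argument defaults to None
```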

Setting up the authentication context

This short scrap of code rebuilds a user context for ID-Key authentication based on cached values written into the script. This isn’t secure or robust, and is only done for demonstration purposes here; ideally if you put a script into production to do administrative tasks like this, you would have a more robust way to regenerate your user context.

user_context = d2lauth.fashion_user_context(app_id=_cached_app_context['app_id'],
                                            app_key=_cached_app_context['app_key'],
                                            d2l_user_context_props_dict=_cached_user_context)

Now that you have a user_context object, you can use its API to either decorate URLs for authentication, or, even more seamlessly, you can use it with the requests package directly as an authentication helper object.

Wrapping the web requests/responses

These three helper methods exist only to keep a nice separation between the part of the script that actually wants to use the requests library, and the part of the script that wants only to “make an API call” and do things based on the (assumed to be successful) result.

# helper functions

# given a response object:
# - check its status, and turn any non-200 into an exception
# - return its content as json (in a more robust script, we'd do more
#   sophisticated checking before returning content from the response)
def _fetch_content(r):
    r.raise_for_status()
    return r.json()

# make an HTTP GET call:
# - use the user_context as an authentication-handler
# - use the `scheme` and the `host` in the user context for URL forming
# - pass on any other keyword arguments we get handed
def _get(route, user_context, **kwargs):
    kwargs.setdefault('auth', user_context)
    r = requests.get(user_context.scheme + '://' + user_context.host + route, **kwargs)
    return _fetch_content(r)

# make an HTTP POST call:
# - use the user_context as an authentication-handler
# - pass on any other keyword arguments we get handed
def _post(route, user_context, **kwargs):
    kwargs.setdefault('auth', user_context)
    r = requests.post(user_context.scheme + '://' + user_context.host + route, **kwargs)
    return _fetch_content(r)

Note that these helper methods are agnostic about the exact APIs being called. We should also put wrappers around the two API calls that our script is going to use: one to create a course import job, and one to check on the status of an existing job.

Here’s the code that handles these two API calls. Note that it’s quite fragile: it assumes that the value returned from each API call is the expected JSON from a successful response (the underlying _get and _post helpers raise an exception if the calls don’t succeed):

def _make_course_import_job(user_context, course_id, course_package, **kwargs):
    import_job_route = '/d2l/api/le/{0}/import/{1}/imports/'.format(_LE, course_id)
    kwargs.setdefault('files', {})
    kwargs['files'].update( {'file': (course_package, open(course_package, 'rb'), 'application/zip')} )
    return _post(import_job_route, user_context, **kwargs).get('JobToken')

def _get_import_job_status(user_context, course_id, job_token, **kwargs):
    job_status_route='/d2l/api/le/{0}/import/{1}/imports/{2}'.format(_LE, course_id, job_token)
    return _get(job_status_route, user_context, **kwargs).get('Status')
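Plugging sample values into the two route templates shows the URLs these wrappers actually hit; the course offering ID and job token below are made up for illustration:

```python
_LE = '1.21'
course_id = '6606'     # hypothetical course offering ID
job_token = 'abc123'   # hypothetical job token

# same templates as in _make_course_import_job and _get_import_job_status
import_job_route = '/d2l/api/le/{0}/import/{1}/imports/'.format(_LE, course_id)
job_status_route = '/d2l/api/le/{0}/import/{1}/imports/{2}'.format(_LE, course_id, job_token)

print(import_job_route)   # /d2l/api/le/1.21/import/6606/imports/
print(job_status_route)   # /d2l/api/le/1.21/import/6606/imports/abc123
```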

Handling the business logic

Now that we have wrapped the underlying HTTP transport layer, and the layer that handles making API calls and extracting the bits of JSON we need from successful responses (job tokens and status strings), we can finally get to the main steps of our script (and they turn out to be surprisingly simple):

# main steps
job_token = args.job_token or _make_course_import_job(user_context, args.course_offering_id, args.course_package)

print('Import job {} status:'.format(job_token))

status = _get_import_job_status(user_context, args.course_offering_id, job_token)
print(status)
while 'COMPLETED' not in status:
    if 'IMPORTING' in status:
        # the importing stage takes a while, so let's not over-poll the server
        time.sleep(60)
    new_status = _get_import_job_status(user_context, args.course_offering_id, job_token)
    if new_status != status:
        status = new_status
        print(status)

First, we make the API call to get the job token for the course import job we care about: either we already have one (args.job_token), or we call our internal wrapper function to create a new course import job and fetch back the job token.

Then, we get the status of the course import job, print it, and loop on the status until we get (and can print) a COMPLETED status. The first few steps of the course import workflow (UPLOADING, and READYTOIMPORTNATIVELY if the package can be imported natively) happen relatively quickly, so we don’t bother sleeping around those calls. The IMPORTING step, however, will take a while, so once we get to that stage, we slow down the polling on status to once every 60 seconds.

The last if clause in the script is a convenience that reduces the printed output: the script only prints out the status when it reaches a new stage, so it doesn’t continually print a status over and over again.
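You can check that deduplication logic offline by substituting a canned status sequence for the real API call; the statuses below mimic the workflow stages named above, and no server is needed:

```python
# canned sequence standing in for _get_import_job_status
statuses = iter(['UPLOADING', 'UPLOADING', 'READYTOIMPORTNATIVELY',
                 'IMPORTING', 'IMPORTING', 'COMPLETED'])

def fake_get_status():
    return next(statuses)

printed = []
status = fake_get_status()
printed.append(status)
while 'COMPLETED' not in status:
    new_status = fake_get_status()
    if new_status != status:
        status = new_status
        printed.append(status)

# each distinct stage appears exactly once, despite repeated polls
print(printed)  # ['UPLOADING', 'READYTOIMPORTNATIVELY', 'IMPORTING', 'COMPLETED']
```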

What you can do from here

With this script you can see the simple workflow around course import automation.

To start with, you need to know the D2LID values for the course shells to receive your import packages. However, you could extend this script to create new course shells via the API, attaching those course shells to course templates that already exist in your Brightspace instance. Once you know the D2LID of the newly created course offering, you can turn that around and pass it to the course import API.

You can also make much better use of the status values returned from the course import process, to know when a course import job is done, and help your script decide when to move on to another course import job (if you’re seeking to automate the import for a series of courses and packages).

Finally, to avoid polling entirely, you can alter the _make_course_import_job function to provide the callbackUrl query parameter that the API allows. Brightspace will then post to your callback when the import job completes, successfully or unsuccessfully. To use this workflow, of course, you must have a small web service that your Brightspace instance can reach to receive that callback.
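Forming the route with that query parameter needs nothing beyond the standard library; a minimal sketch, where the callback address is hypothetical:

```python
from urllib.parse import urlencode

_LE = '1.21'
course_id = '6606'   # hypothetical course offering ID

# same route template the script's _make_course_import_job uses,
# extended with a URL-encoded callbackUrl query parameter
import_job_route = '/d2l/api/le/{0}/import/{1}/imports/'.format(_LE, course_id)
query = urlencode({'callbackUrl': 'https://example.com/import-callback'})
route_with_callback = import_job_route + '?' + query

print(route_with_callback)
```

With requests, you could achieve the same thing by passing params={'callbackUrl': ...} through the _post wrapper’s keyword arguments instead of building the query string by hand.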
