
The Advanced Distributed Learning Initiative

R&D Contracts (BAA)

The ADL Initiative contracts with external vendors and academic institutions to perform some research and development tasks.

Broad Agency Announcement

The ADL Initiative uses a Broad Agency Announcement (BAA) contracting mechanism to solicit and award external R&D contracts. The U.S. Government uses BAAs to award contracts for basic research, applied research, and advanced technology development (so-called budget activities 6.1, 6.2, and 6.3). The ADL Initiative primarily uses its BAA to contract for advanced technology development (budget activity 6.3).

Our official BAA solicitation is posted on FedBizOpps. It lists general areas of interest. Each year, we also identify more refined research targets, which are listed on this page (below).

ADL Initiative BAA Solicitation on FedBizOpps

How to Participate

Businesses and academic institutions may submit project ideas for consideration. The official FedBizOpps link includes full participation details, and a summary is provided below for convenience (the primary reference is the FedBizOpps guidance).

Offerors are encouraged to carefully review the BAA, and to ask for clarification and assistance as required. The Small Business Administration also offers a number of Contracting Assistance Programs for Federal contracts.

Step 1: Quad Charts (optional but recommended)

You can start at any point in this process, but we recommend you first submit a quad chart. Quad charts let you pitch ideas quickly, and if we believe an idea has merit, fits our mission, and can potentially be funded, then we'll request a more detailed white paper. You can use this template for your quad charts: Quad Chart Template.

Step 2: White Papers (optional but recommended)

If the ADL Initiative is interested in your quad chart submission, the Government may request a corresponding white paper, due within 30 days of the request. You're encouraged to contact the ADL Initiative to discuss project ideas prior to white paper submission. Offerors may use their own format for the white paper but are encouraged to consider the Heilmeier Catechism as part of their organizing structure.

Step 3: Full Proposals

After white paper review, offerors may be asked to submit a formal proposal. Proposal guidelines are provided in the BAA. Proposals should include detailed technical and management plans, pricing data, execution timelines, and a proposed statement of work. After receiving a formal request for proposals, offerors have 30 days to submit. Once a proposal is requested, open discussions with ADL Initiative personnel are no longer permitted except with explicit Contracting Officer involvement and oversight. Offerors should be notified of the award decision within 4–6 months of proposal submission.

Review Process

Submitted products are reviewed by technical/scientific personnel knowledgeable in the subject matter to determine whether the product is consistent with the intent of the BAA and is of interest to the Government. Quad charts are considered primarily on scientific/technical merit and importance to agency programs. White papers are reviewed on scientific/technical merit, importance to agency programs, and proposed price. Proposals are evaluated on the criteria presented in the BAA: scientific and technical innovation and benefits to the ADL Initiative; scientific and technical approach; management; and cost/benefit.

Products having insufficient scientific/technical merit, insufficient relevance to the ADL Initiative mission, or those in areas for which funds are not expected to be available may not advance further in the process. In instances where the proposed cost increases substantially from earlier rough order of magnitude (ROM) estimates, the offeror should provide clarification for the price increase; inadequately explained deviations between the white paper and proposal ROMs will negatively impact the proposal's evaluation.


Current Research Topics

The purpose of this BAA is to solicit industry participation in furthering the Distributed Learning modernization and policy mission of the ADL Initiative. To that end, the ADL Initiative has established an overarching research and development science and technology (S&T) strategy known as the "Total Learning Architecture" (TLA).

The TLA seeks to decouple learning organizations from reliance on a Learning Management System (LMS)-centric IT infrastructure and instead use a loose federation of web-based services and data sources that communicate human performance information via standardized data contracts. The learning activities and evidence of human performance of interest to the TLA, and to achieving a "lifelong learning" path for federal personnel, include traditional web-based content, classroom experiences, simulations, and even on-the-job training and work experiences. These experiences generate data about "learning activities" within an overall "learning ecosystem." Each of the other topics in this BAA addresses an aspect of this ecosystem.

1. Learner Profiles for DoD

The promise of new TLA applications stems from the ability to create, collect, transmit, process, and archive information on a massive scale. The vast increase in the quantity of personal information that is being collected and retained, combined with the increased ability to analyze it and combine it with other information, is creating valid concerns about the ability of different TLA components to manage these quantities of data responsibly. Desired outcomes include developing a TLA-compliant, extensible, and open-source learner profile for use within the DoD and Government.

Learner profiles typically include a broad range of data, including demographic data, student interests, learning preferences, inter- and intra-personal skills, and both existing competencies and those that need to be developed. They can also include information on credentials or other student learning strengths, needs, and types of supports that have been successful in the past. The ADL Initiative is researching how learner characteristics influence the intelligent delivery of content to optimize learning across a person's entire career, using an aggregation of information about a learner populated from various sources. Respondents should address any or all of the following topics:

  • Research policy, technology, cybersecurity, and protected information requirements: Identify best practices for managing a learner profile across the continuum of lifelong learning. Discuss different approaches to protecting personally identifiable information and outline key considerations for making portions of the learner profile portable and/or controlled by the learner.
  • Define data concepts for learner profiles: Define approaches, processes and methodologies for creating, managing and storing learner profile data. Identify and categorize the different data elements a learner profile should include. Describe lifecycle considerations for each data type and identify the authoritative sources where each data type can be discovered. Address both human readable and machine-readable data.
  • Current best practices for learner profiles: Analyze and report current industry, government, and academic best practices for capturing records of the knowledge, skills, attitudes and other attributes of individuals as elements within their learning enterprises. Investigate how other communities and/or industries use learner/personnel profiles within their environment and the applicability of these approaches to the TLA. Report should address communications and connections between learner profile data layer and competency management service layer.
  • Develop a proof-of-concept digital learner profile: Create and implement a reference implementation of a learner profile. This profile must work within the TLA and should post updates via an established communication protocol that supports a publish/subscribe logical bus architecture.
  • Develop an approach for non-repudiation of credentials: Create an approach for ensuring traceability between portable universal user ID tags and real personal data, including PII, in a controlled environment. Consider use cases of failed UUID or single sign-on security servers, non-homogeneous single sign-on or directory services, and portability of users from one campus or system enclave to another, including at multi-level security.
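As a rough illustration of the publish/subscribe pattern the proof-of-concept bullet describes, the sketch below publishes a learner-profile update event over a minimal in-memory bus. The topic name, event fields, and `Bus` class are all assumptions for illustration, not an established TLA interface; a real implementation would use a formal data contract and a production message broker.

```python
import json
from collections import defaultdict

# Minimal in-memory publish/subscribe bus standing in for the TLA's
# logical bus; topic names and handlers here are illustrative only.
class Bus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self.subscribers[topic]:
            handler(message)

# A hypothetical learner-profile update event. The field names are
# assumptions; a real deployment would define them in a data contract.
def make_profile_update(learner_id, field, value):
    return json.dumps({
        "learnerId": learner_id,   # opaque identifier, never raw PII
        "field": field,
        "value": value,
    })

bus = Bus()
received = []
bus.subscribe("learner.profile.updated",
              lambda m: received.append(json.loads(m)))
bus.publish("learner.profile.updated",
            make_profile_update("uuid-1234", "preferredModality", "simulation"))
```

Decoupling producers from consumers this way is what lets profile updates flow to multiple TLA components (recommenders, dashboards, archives) without point-to-point integrations.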

2. Course Catalog and Activity Registry

An Activity Registry is an approach to capturing, connecting, and sharing data about learning resources available to an organization. Key features include the ability to generate and manage content metadata, manage taxonomies and ontologies, align content with competencies, generate and manage paradata, perform semantic search, and create machine-actionable metadata for AI-based recommenders. Desired outcomes include documenting the path for developing, evaluating, and transitioning a common course catalog and activity registry.

Proposals for this topic should address any or all of the following:

  • Research on policy, technology, cybersecurity and protected information requirements: Identify best practices for managing an activity registry that encompasses all the different types of learning activities an individual might encounter across the continuum of learning. Discuss different approaches that might be required for different types of instructional content.
  • Content metadata: Identify an approach for describing various learning activities a learner will encounter across the continuum of lifelong learning. Investigate various metadata formats, standards and specifications to inform an approach that describes content in support of the Future Learning Ecosystem. Create an approach that considers machine learning metadata (for example, relevance and engagement) beyond the more commonly available human-readable format. This includes consideration from a paradata perspective.
  • Paradata: Identify a taxonomy and approach for gathering, verifying, and storing paradata within the TLA ecosystem. The TLA envisions that organizations that consume a learning resource will also capture information about how the resource is used, including context, user feedback, rankings, ratings, annotations, etc. Over time, paradata, usage data, and third-party analytical data may become more valuable and more useful than curated cataloging metadata for discovery and for understanding which learning resources are effective.
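To make the paradata bullet concrete, the sketch below models a per-resource paradata record that accumulates usage counts, ratings, and annotations as consumers use the resource. The field names and `ParadataRecord` class are hypothetical, chosen only to illustrate the kind of usage data described above; they do not reflect a defined TLA schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative paradata record for one learning resource. Fields are
# assumptions for this sketch, not an established TLA specification.
@dataclass
class ParadataRecord:
    resource_id: str
    context: str                              # e.g., course, unit, or job task
    usage_count: int = 0
    ratings: list = field(default_factory=list)
    annotations: list = field(default_factory=list)
    last_used: str = ""

    def record_use(self, rating=None, annotation=None):
        """Capture one use of the resource, with optional feedback."""
        self.usage_count += 1
        self.last_used = datetime.now(timezone.utc).isoformat()
        if rating is not None:
            self.ratings.append(rating)
        if annotation is not None:
            self.annotations.append(annotation)

    def mean_rating(self):
        return sum(self.ratings) / len(self.ratings) if self.ratings else None

p = ParadataRecord("activity-42", context="basic-electronics")
p.record_use(rating=4, annotation="Used as a refresher before lab work")
p.record_use(rating=5)
```

Aggregating records like this across many consumers is what would eventually let usage signals compete with, or outperform, curated cataloging metadata for discovery.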

3. Data Labeling

The TLA promotes an increasingly complex world of interconnected information systems and devices. The promise of these new applications stems from their ability to create, collect, transmit, process, and archive information on a massive scale. However, the vast increase in the quantity of personal information that is being collected and retained, combined with the increased ability to analyze it and combine it with other information, is creating valid concerns about the ability of different TLA components to manage these unprecedented volumes of data responsibly. There is an urgent need to strengthen the underlying systems, component products, and services that make learning data meaningful. Data labeling will enable the implementation of artificial intelligence, recommendation engines, and adaptive tutors by creating a uniform approach for labeling data within the Future Learning Ecosystem. Desired outcomes include a data strategy and an associated lifecycle that creates a solid foundation for the TLA to expand upon. Respondents should address any or all of the following:

  • Data labeling strategy: Identify best practices and lessons learned from other big data initiatives such as finance, healthcare, or eCommerce to outline an approach for organizing data within the TLA and the Future Learning Ecosystem. Attach meaning to different types of data and correlate with the different systems that are able to generate this data across the continuum of lifelong learning. Discern the different data types that each TLA component is currently generating and create a strategy for labeling data across these components.
  • Pattern discovery: Develop tools or algorithms to discover relevant patterns in the data. Assign standard meanings to those patterns and explain how each pattern can be used to derive insights and develop models. Develop clustering strategies for how to organize the different data types. Consider both structured and unstructured data that may be generated by the different components of the TLA.
  • Data lifecycle: Define the relevance of different data types over time. Identify an approach for capturing the decay of importance between different data types and identify authoritative sources for generating each data type.
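The data-lifecycle bullet asks respondents to capture how the importance of different data types decays over time. One simple way to express that, sketched below, is an exponential half-life model per data type; the data types, half-lives, and the choice of exponential decay are all assumptions made for illustration, not a position the BAA takes.

```python
# Illustrative decay model for data relevance over time. Each data
# type gets an assumed half-life; relevance halves every half-life.
HALF_LIFE_DAYS = {
    "assessment_score": 365,   # formal scores stay relevant longer
    "clickstream": 30,         # raw interaction data fades quickly
}

def relevance(data_type, age_days):
    """Relative relevance in [0, 1] of a record of the given age."""
    half_life = HALF_LIFE_DAYS[data_type]
    return 0.5 ** (age_days / half_life)
```

A respondent would of course need to justify the functional form and per-type parameters empirically; the point here is only that "decay of importance" can be made machine-actionable.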

4. Policy Guidance

MIL-HDBK-29612 and other documents provide guidance to all the services for the preparation and evaluation of solicitations and responses for training and education products. Emphasis has been placed in the acquisition system on reducing costs, promoting commercial products and practices, and promoting the use of the latest technologies. Currently, MIL-HDBK-29612 and other DoD documentation are oriented towards the production of SCORM-conformant eLearning media and simulation-based training. Since the widespread adoption of SCORM, a number of eLearning technologies and electronic performance support systems have evolved that are not included within the traditional media analysis templates. In support of the TLA, an extensive review of this handbook and other documents is essential to understanding current acquisition approaches and updating ADL guidance under DoDI 1322.26.

  • Gap analysis and recommended updates: Perform a gap analysis of the MIL-HDBK-29612 handbook and DoDI 1322.26 fungible references to expose deficiencies in the existing materials and highlight areas that warrant revision, particularly within the context of DoD efforts towards building the Future Learning Ecosystem, embracing learning engineering, and applying enterprise learning analytics. Discuss the benefits of implementing and the consequences of not implementing recommended changes.

5. TLA Specifications and Standards

The TLA is a series of specification and standards that define the technology layers supporting the Future Learning Ecosystem, providing the interoperability essential to managing lifelong learning in its many forms and uses.

Desired outcomes include generating more robust specifications and standards supporting the TLA and transitioning one or more of them to be incorporated into DoDI 1322.26 (directly or as fungible references).

  • Improving xAPI Profiles for DoD use: Review existing xAPI Profiles and identify requirements from DoD stakeholders. Use this information to inform the development of a generalizable “default” xAPI Profile (or integrated set of Profiles) for use in existing and/or under-development eLearning DoD “ecosystems.” Conduct empirical validation testing of this Profile(s) with a DoD partner in a realistic training and/or education setting.
  • Alignment with learning technology standards: Analyze and report on standards bodies (e.g., SISO, IEEE, ISO) and their standards (e.g., S1000D, HPML, LMRI, cmi5) for learning activity providers that should be incorporated by reference into the TLA and evaluated for possible changes to make those learning activity providers more “TLA ready.” Various learning activity providers, from SCORM courses to simulators to smart publications, will interact within the TLA ecosystem. This should include addressing data contracts, enabling diverse learning activity providers to interact with the TLA-enabled Future Learning Ecosystem.
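For readers unfamiliar with xAPI, the statements that a "default" DoD Profile would constrain follow the actor/verb/object structure defined in the xAPI specification. The sketch below builds one such statement using the real ADL verb vocabulary; the actor, activity ID, and activity name are made up for illustration, and an actual Profile would further restrict which verbs, extensions, and context fields are allowed.

```python
# Basic xAPI statement (actor/verb/object) per the xAPI specification.
# A DoD "default" Profile would standardize which verbs and activity
# definitions may appear; the specific values below are illustrative.
def make_statement(actor_mbox, actor_name, verb_id, verb_display,
                   activity_id, activity_name):
    return {
        "actor": {"objectType": "Agent", "mbox": actor_mbox, "name": actor_name},
        "verb": {"id": verb_id, "display": {"en-US": verb_display}},
        "object": {
            "objectType": "Activity",
            "id": activity_id,
            "definition": {"name": {"en-US": activity_name}},
        },
    }

stmt = make_statement(
    "mailto:learner@example.mil", "Example Learner",
    "http://adlnet.gov/expapi/verbs/completed", "completed",   # ADL verb IRI
    "https://example.mil/activities/cyber-awareness", "Cyber Awareness Course",
)
```

Because every statement shares this shape, a well-specified Profile lets any TLA component parse performance data from any learning activity provider without bespoke adapters.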

6. Competency and Credential Management (subject to funding)

Competency and credential management are TLA components that generate rich and traceable data about learning experiences and how they relate to people's capability proficiency estimates and formalized qualifications. Competencies are a formal way of encapsulating the knowledge, skills, abilities, and behaviors that individuals need to be effective in their roles, and they establish how those roles relate to organizational goals and success. Credentials are formal assertions from a recognized authority, based on training, education, experience, or other verification, that someone possesses certain capabilities.

Competencies are typically defined at different levels of expertise (e.g., novice, intermediate, expert), and they have many non-exclusive and multidirectional relationships with one another. For example, an expert pianist likely possesses some ability in musical theory and might be able to learn other instruments more rapidly. These relationships are defined in “competency frameworks.” Competencies and competency frameworks are the common currency for learning and development. The translation of knowledge, skills, and experiences into universal competency definitions will allow the military to make personnel data—and by extension, the individuals, themselves—more interoperable across systems and jobs, both within and beyond the DoD.

However, many challenges remain. For instance, the competency descriptions currently used by human reviewers fail to adequately support software systems. Further, organizations often define and organize their competencies differently, limiting their interoperability, and even when described universally, the same competency (e.g., leadership) may apply differently in different contexts. Consequently, the ADL Initiative is seeking a robust set of data, metadata, and interaction specifications to represent competencies and competency frameworks. Competencies also change over time due to many variables, including skill decay, organizational realignment, and the evolution of educational processes. Respondents should describe how their approaches cope with these changes and provide tools for both administrative and learner users to interact with the software.

Desired outcomes include an effective reference implementation of enterprise competency and credential management supporting DoD learning ecosystems, including a data strategy and an associated lifecycle that create a solid foundation for the TLA to expand upon.

  • Competency data schemata for lifelong learning at an enterprise level: Conduct a comprehensive review of existing competency data (and metadata) schemata, comparing and contrasting their definitions. Define the data elements needed for each competency and address related technical challenges, such as how to define competencies that fall into multiple domains, represent uncertainty, or degrade estimates of a person's competence over time. Discuss different approaches for classifying different types of competencies. Investigate the potential for using shared vocabularies across an enterprise for assigning metadata to each competency.
  • Competency framework definitions: Define principles for Competency Frameworks, such as how to represent the relationships among nodes, make inferences, and visualize the knowledge graphs. Create an approach for defining the many-to-many relationships among competency objects for different roles within an organization. Define a weighting schema that enables the tailoring of influence between different competencies for different roles (for example, “competency object to role” and “competency object to competency object”). Review the literature on existing competency frameworks and create a roadmap for enabling lifelong learning.
  • Competency governance: Outline system-wide policy, process, and governance risks and recommendations, such as how to review and update Competencies and Competency Frameworks over time.
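The weighted many-to-many relationships described in the competency framework bullet can be sketched as a small graph structure: both "competency to role" and "competency to competency" edges carry weights that tailor influence per role. The class, competency names, and weight values below are invented for illustration; a real framework would be far richer and grounded in a reviewed schema.

```python
# Sketch of weighted many-to-many relationships in a competency
# framework. Edge weights, names, and the API are illustrative only.
class CompetencyFramework:
    def __init__(self):
        self.role_weights = {}   # (competency, role) -> influence weight
        self.related = {}        # (competency, competency) -> relation weight

    def link_role(self, competency, role, weight):
        """Weight a competency's influence on a given role."""
        self.role_weights[(competency, role)] = weight

    def relate(self, comp_a, comp_b, weight):
        """Weight the relationship between two competency objects."""
        self.related[(comp_a, comp_b)] = weight

    def role_profile(self, role):
        """Competencies relevant to a role, strongest influence first."""
        items = [(c, w) for (c, r), w in self.role_weights.items() if r == role]
        return sorted(items, key=lambda cw: cw[1], reverse=True)

fw = CompetencyFramework()
fw.link_role("leadership", "squad-leader", 0.9)
fw.link_role("communication", "squad-leader", 0.7)
fw.relate("leadership", "communication", 0.5)   # non-exclusive relation
```

Because the same competency object can link to many roles (and to other competencies) with different weights, this structure supports the "common currency" idea above: one definition, reused and re-weighted across an organization.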