It’s been a long road, winding through swaths of government data, unstructured and unrefined, toward a new vision of public service in which government anticipates the needs of its citizenry. There has been plenty of downtime, and more than a few detours, while technology and mission caught up with that vision.

But a combination of IT modernization, cloud service adoption, burgeoning data pools, content management advances, and the ever-growing imperative to convert data into knowledge finally has AI positioned on the cusp of wide, and eventually ubiquitous, use by the private and government sectors alike.

Big Data (the exponentially expanding ocean of transactional data created by tech-connected humans, governments, businesses, and other organizations when they interact) doesn’t celebrate a birthday every year, because nobody really knows when it was born. Sometime between the adoption of cuneiform writing and the invention of mainframe computing is probably close enough.

Sometime around 2010, give or take, the idea found its present name, and the modern concept of Big Data took firm hold as a commercial science whose vital usefulness is often better understood in theory than in practice. Maximizing that usefulness has been an uphill battle, because there is only so much that human algorithmic muscle can do on its own to divine insight from data. But that era of possibility is steadily giving way to the age of probability.

Industry Push

How do we know AI is finally on the cusp of its mainstream breakout? Because of the vibrant market that has grown up to take unstructured “dark data” and ease its conversion into useful data from which insights can be generated. And because the leading institutions of Big Government are increasingly taking steps to clear the pathways toward its use.

On the industry side, here’s what the tech sector is providing so that the private and public sectors can make AI a reality:

  • Easier pathways to cloud-based services, and their promise of nearly unlimited compute capacity to crunch Big Data;
  • Tools that convert unstructured data, which has limited utility in its raw form, into useful data that can yield insights; and
  • Platform development approaches that make possible faster and more efficient development of applications that give context to unstructured data and speed integration into various business processes.

That final piece is an oft-overlooked step. Legacy content management in the Federal government was not built for the scale of cloud computing or the rigor of AI services.

For decades, content has existed in disparate silos, and as a result, agencies might use twenty or more systems to manage their content and business processes. And as more government services go digital, the associated explosion of documents, content, and queries without context has pushed these legacy systems to the brink.

All of that content, meanwhile, holds untapped and actionable insights. AI finally offers the promise of extracting information from those various sources and creating metadata, or context, that can help government understand the patterns and needs of its citizens.
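As a loose illustration of what that metadata creation can look like in practice, here is a minimal sketch using the open source spaCy library to pull named entities out of unstructured text. The sample document and the choice of library are assumptions for illustration, not a description of any particular agency’s tooling.

```python
# Minimal sketch: derive metadata (named entities) from unstructured text so
# documents can be indexed and searched with context. Requires spaCy and its
# small English model (python -m spacy download en_core_web_sm).
import spacy

nlp = spacy.load("en_core_web_sm")

# Invented sample text standing in for an agency document.
document = (
    "On March 4, the Department of Veterans Affairs received a FOIA "
    "request from a resident of Austin, Texas, about claim processing times."
)

doc = nlp(document)

# Each entity becomes a piece of metadata: a text span plus a label
# such as DATE, ORG, or GPE (geopolitical entity).
metadata = [(ent.text, ent.label_) for ent in doc.ents]
print(metadata)
# e.g. [('March 4', 'DATE'), ('the Department of Veterans Affairs', 'ORG'),
#       ('Austin', 'GPE'), ('Texas', 'GPE')]
```

Attached to each document, tags like these are the “context” that lets downstream analytics spot patterns across millions of records.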

Platform technologies have risen to meet the need, enabling agencies to unify previously siloed management systems (for FOIA management, case management, records management, and so on), aggregate data from all those sources, and feed it into more advanced analytical tools to shape the future of government business.

These platforms, many built on open source technology and using open application programming interfaces (APIs), are being deployed seamlessly on cloud infrastructure and integrated with the burgeoning AI service market.
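A rough sketch of that aggregation pattern might look like the following, with hypothetical endpoint URLs and field names standing in for real agency APIs, and the assumption that each endpoint returns a JSON list of records.

```python
# Hedged sketch: pull records from several previously siloed systems through
# open REST APIs and merge them into one normalized feed for analytics.
# The URLs and field names below are illustrative placeholders.
import requests

SOURCES = {
    "foia":    "https://example.agency.gov/api/foia/requests",
    "cases":   "https://example.agency.gov/api/cases",
    "records": "https://example.agency.gov/api/records",
}

def fetch_all():
    """Collect records from every source into a single normalized list."""
    combined = []
    for system, url in SOURCES.items():
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        for item in resp.json():  # assumed: each endpoint returns a JSON list
            combined.append({
                "source": system,          # tag each record with its origin
                "id": item.get("id"),
                "created": item.get("created"),
                "body": item.get("text", ""),
            })
    return combined
```

The point of the pattern is that analytics and AI services integrate once, against the unified feed, instead of separately against each legacy system.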

The Federal push towards digital transformation comes as more and more data flows in. But government finally seems to be envisioning an environment in which insights flow out.


“What I have found is that the most forward-looking agencies are adopting a platform approach that enables standardization in their digital transformation efforts,” said Tony Franzonello, Vice President of Federal Sales at Alfresco. “By bringing development and deployment skills and energies to a unified framework, an agency can more efficiently design and roll out a portfolio of digital transformation projects, including AI and machine learning.”

Federal Push

Here’s just a sampling of the many wheels in motion on the Federal government policy side that are changing the equations for thinking, and action, on the content, cloud, and AI fronts:

The 21st Century Integrated Digital Experience Act (IDEA), signed into law by President Trump on Dec. 20, 2018, has mandated that agency heads “review public-facing applications and services to ensure that those applications and services are, to the greatest extent practicable, made available to the public in a digital format.”

Within two years of the bill’s enactment, agencies must also ensure that “any paper based form that is related to serving the public is made available in a digital format.”

That places a greater burden on the business of government, which is largely supported by its IT function. And so the White House is now seeking to create effective guideposts not only to govern that increase in digital volume, but to harness it more effectively.

Federal CIO Suzette Kent said that the White House’s Office of Management and Budget, after releasing a set of data principles in an initial draft, will soon finalize the Federal Data Strategy, which is expected to align with the newly minted OPEN Government Data Act.

At the very top of the executive branch, President Trump last month issued an AI executive order that focuses on prioritizing Federal government investments in AI-driven projects, and on having Federal agencies develop research and development budgets for AI that will support core missions. The order also directs agencies to lend hard resources, including data, models, and compute capacity, to academia and industry to seed the technology’s growth.

Kent also said her office is working to create, and finalize, a menu of policies that will drive agencies closer to adoption of cloud services and the advanced applications they can yield, including AI applications. “To be able to fully leverage these automated technologies, we have to have high quality data, and we have to have mechanisms for sharing that data,” she said.

Agencies will look to establish those sharing mechanisms using cloud, development platforms, and open APIs, with the new automation tools built right on top of them.
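One plausible, simplified shape for such a sharing mechanism is a small open API that exposes already-aggregated data for automation tools to consume. The Flask framework and the stand-in dataset below are illustrative assumptions, not a reference to any actual agency system.

```python
# Sketch of an open data-sharing endpoint that automation tools could build
# on, rather than integrating with each legacy system directly.
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for data already aggregated from the platform's underlying systems.
RECORDS = [
    {"id": 1, "source": "foia", "status": "open"},
    {"id": 2, "source": "cases", "status": "closed"},
]

@app.route("/api/v1/records")
def list_records():
    # Downstream AI/ML services consume this one endpoint instead of
    # maintaining separate connections to every content silo.
    return jsonify(RECORDS)

if __name__ == "__main__":
    app.run(port=8080)
```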

“Artificial intelligence and Machine Learning are two key building blocks to creating true systems of engagement,” said Franzonello, noting the technology’s potential impact on improving citizen experience. “Platforms make it easier to roll AI and ML services into applications, especially those in the cloud, because they were designed in the cloud era for API economy,” he added.

On the longer-range front, Kent said she is working on a policy to support faster Federal agency use of automation technologies. “We have a few agencies that are already on the journey, but not everyone. In some places, there are questions. So what we’re doing for this year is putting some guardrails in place, and defining clearly the automated technologies we want to focus on,” she said. The policy will include guidance on when to use automation, which categories of automation fit different types of workloads, fitness-for-use models, and oversight requirements.

Managing the Pushback

Alongside these executive branch actions, which amount to automation advocacy tempered with cautions, Congress is also wading into the AI debate. Legislators have focused mainly on efforts to address ethical concerns surrounding the technology, but nothing that looks even remotely like a show-stopper.

At a high level, lawmakers are continuing what has become several years of debate over consumer data privacy and security issues with an eye to affording better protections for such data.

But the online industry’s persistent and deep-pocketed lobbying efforts on those issues are likely to produce some combination of these outcomes: 1) the issues continue to be kicked down the road; and/or 2) Congress ends up passing industry-backed legislation featuring preemption of state privacy statutes and light-handed regulation of the industry by the relatively benign Federal Trade Commission.

Platform development and deployment methodologies have enabled more effective oversight of data as it enters the AI gray area. As the legislative guardrails go up, the ability to manage compliance requirements and safeguard data will take on increasing importance for Federal agencies. Platform application development allows strict policies for information governance to be extended to third-party services as agencies build more apps in the cloud.
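To make that governance idea concrete, here is a hedged sketch of a policy check enforced in the platform layer before any record is handed to a third-party cloud service. The classification labels and the policy itself are hypothetical.

```python
# Illustrative governance gate: records are released to external services
# only when their classification is on an allow list. Labels are invented.
ALLOWED_FOR_EXTERNAL = {"public", "unclassified"}

def send_to_ai_service(record, call_service):
    """Forward a record to an external service only if policy permits."""
    if record.get("classification") not in ALLOWED_FOR_EXTERNAL:
        raise PermissionError(
            f"Record {record.get('id')} blocked by governance policy"
        )
    return call_service(record)
```

Because every application on the platform routes through the same gate, the policy travels with the data as agencies build more apps in the cloud.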

But calls for proper data stewardship in the cloud and AI era will remain persistent, it seems.

The latest significant action in Congress has come in the form of a non-binding resolution calling for ethical AI development. That resolution, while putting the focus on privacy and discrimination issues, most notably has the backing of some of the tech sector’s business giants and the industry groups that represent them, indicating that its passage doesn’t represent a big hurdle for the industry going forward.

Kate Polit is MeriTalk's Assistant Copy & Production Editor covering the intersection of government and technology.