Homeland Security and other organizations are working on ways to improve the technologies used at airport checkpoints. The T.S.A. is set to roll out new CT systems that can automatically identify items hidden in passenger baggage, and at least one company, Smiths Detection, is exploring the use of neural networks at security checkpoints.

In theory, neural networks can accelerate the evolution of airport security, mainly because such systems can learn so quickly from data, relying less on individual rules and code painstakingly built by engineers.

To help data scientists and machine-learning researchers train their algorithms, Homeland Security is supplying more than 1,000 three-dimensional body scans.

The department is not sharing scans of the more than two million people screened each day at the nation’s airports. Instead, T.S.A. workers volunteered to help create the data for the contest from scratch, repeatedly walking through a set of test scanners at a laboratory in New Jersey. In some cases, the workers carried concealed items through the scanners, and these images are carefully labeled.

By analyzing this data, neural networks and other algorithms can learn to pinpoint concealed items on their own. Jeremy Achin, a founder and the chief executive of the data analysis company DataRobot, said that neural networks were well suited to such a task.

But he also warned that the technology could make mistakes and that in some cases it could be vulnerable to bad actors. Research has shown that after analyzing the performance of an image-recognition system driven by a neural network, miscreants could mark or otherwise alter items in ways that fool the system into seeing things that are not there, or into missing things that are.

For those reasons, the immediate aim is not to build technology that replaces human screeners but to find a way of removing some of the burden from those screeners, Mr. Goldbloom said.