What does it take to get up and running on wrnch Engine?

Developers can build and deploy engaging applications in a variety of environments. To provide maximum performance and efficiency in emulating human vision, the wrnch Engine leverages:

  • Hardware chipsets manage the flow of data between processing units, memory, and input/output devices. AI technologies such as computer vision, deep learning, and neural networks require significant computing power to mimic the power of the human brain. Your applications can run on any hardware device with sufficient processing speed to process images in near real time.
  • High-performance central processing units (CPUs) and graphics processing units (GPUs) accelerate the creation of images in a frame buffer intended for output to a display device.
  • High-performance operating systems support real-time processing of human pose data.
  • Cameras that process RGB (red, green, and blue) bands of light, similar to the human eye, capture human movement as images or video. The wrnch Engine is designed to work with any RGB camera.
  • Deep learning inference engines deduce information from trained models. The wrnch Engine leverages inference engines to rapidly analyze video streams against trained neural networks and detect human pose data. It can accurately detect any person in an image, regardless of age, gender, skin tone, background, and so on.
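The pipeline described above can be sketched in a few lines: RGB frames arrive from a camera and are passed, one at a time, to an inference engine that returns per-joint image coordinates. The sketch below uses a stand-in stub for the inference step; `detect_pose`, `process_stream`, and the joint names are illustrative assumptions, not the wrnch API.

```python
# Illustrative subset of joints a pose engine might report.
JOINT_NAMES = ["head", "neck", "l_shoulder", "r_shoulder"]

def detect_pose(frame):
    """Stub inference step. A real engine would run the frame through a
    trained neural network; here every joint is placed at the frame center
    so the data flow is visible without a model."""
    h, w = len(frame), len(frame[0])
    return {name: (w // 2, h // 2) for name in JOINT_NAMES}

def process_stream(frames):
    """Run pose detection frame by frame, as a real-time loop would."""
    return [detect_pose(f) for f in frames]

# Two tiny 4x4 "RGB frames": each pixel is an (r, g, b) tuple.
frames = [[[(0, 0, 0)] * 4 for _ in range(4)] for _ in range(2)]
poses = process_stream(frames)
```

In a deployed application the stub would be replaced by the engine's own inference call, and `frames` would come from a camera capture loop rather than an in-memory list.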

For information on hardware and software requirements for your environment, visit our Developer Portal.



Developers have a choice of deployment options for applications built with the wrnch Engine and Software Development Kit (SDK).

  • Mobile devices: Transform mobile devices with built-in cameras into real-time computer vision systems. This deployment method lets you create interactive mobile applications, making motion capture accessible to anyone with a smartphone and letting users capture movement in its natural environment.
  • Personal computers: Capture human motion from a webcam for input to the wrnch Engine running on a personal computer. This deployment method leverages graphics processing capabilities, allowing for lower latency and higher accuracy.
  • Embedded in devices: Turn automated devices, such as robots, personal assistants, or cars, into computer vision systems that can see and interact with humans in real time. Embedded capabilities allow you to build custom solutions that monitor human interaction for safety or security, for example, monitoring patients' movements to prevent falls and reduce accidents.
  • Cloud: Extract human pose and motion data from videos uploaded to the wrnch Engine running in the cloud, with scalability and flexibility to match your business needs. This deployment method supports offline use: upload your videos and images to the cloud, then run custom algorithms to analyze posture and movement.
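For the cloud option, the upload step amounts to an authenticated HTTP request carrying the video. The sketch below builds (but does not send) such a request using only the Python standard library; the endpoint URL, header names, and API key are assumptions for illustration, not wrnch's actual cloud API.

```python
import urllib.request

# Hypothetical upload endpoint; substitute your provider's real URL.
UPLOAD_URL = "https://api.example.com/v1/videos"

def build_upload_request(video_bytes, api_key):
    """Build an HTTP POST that uploads a video for offline pose analysis.
    Authentication scheme and content type are assumptions."""
    return urllib.request.Request(
        UPLOAD_URL,
        data=video_bytes,
        method="POST",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "video/mp4",
        },
    )

# A few placeholder bytes stand in for a real MP4 file.
req = build_upload_request(b"\x00\x00\x00\x18ftypmp42", "demo-key")
```

Once the video is stored server-side, analysis can run asynchronously at whatever scale the workload demands, which is what makes this deployment method suitable for batch and offline use.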

Deployment options vary by release; visit the Developer Portal for details.