Our Technology

ARTIFICIAL INTELLIGENCE

Deep Learning

Deep learning is the training of Deep Neural Networks (DNNs) to compute and carry out complex tasks. The DNNs are exposed to a database of terabyte-scale datasets representing a variety of different forms of the same object. As data is understood through the relations between objects, inaccuracies or false predictions made by a DNN are typically corrected by the insertion of more data.
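
As a hedged illustration only, not a description of our production pipeline, the sketch below shows how a small deep neural network might be trained on labelled data with PyTorch; the toy dataset, layer sizes and hyperparameters are placeholder assumptions.

# Minimal sketch of training a deep neural network with PyTorch.
# Dataset, architecture and hyperparameters are illustrative placeholders.
import torch
import torch.nn as nn

# Toy stand-in for a labelled image dataset (e.g. 28x28 images, 10 classes).
images = torch.randn(1024, 1, 28, 28)
labels = torch.randint(0, 10, (1024,))
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(images, labels), batch_size=64, shuffle=True)

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 10),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for batch_images, batch_labels in loader:
        optimiser.zero_grad()
        loss = loss_fn(model(batch_images), batch_labels)
        loss.backward()          # compute gradients
        optimiser.step()         # update the network's weights
    print(f"epoch {epoch}: loss {loss.item():.4f}")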

Through this method, the neural networks develop the power and flexibility to carry out complex tasks by learning to aggregate a nested hierarchy of concepts. Each concept becomes defined within this hierarchy through its correspondence to related nodes, giving the network the ability to characterise abstract concepts in relation to more tangible forms. In this way, structural dependencies are computed and embedded into a mapping function which creates learned representations of the disentangled dependencies between input and output variables.

The utilisation of neural networks can then be explained as the extraction of information from an input into a more structured body of knowledge as an output. As such, deep learning emulates the manner in which the human brain's neural networks learn to decipher objects and information.

While this technology opens up many opportunities for computing tasks, training DNNs requires large amounts of processing power and time. The high computational requirements necessitate using either physical on-site or cloud GPUs in collaboration with traditional CPUs. The GPU is a graphics processing unit whose main strength is breaking work down into minute, separate tasks and computing their results simultaneously. While CPUs have a few cores, GPUs consist of thousands, giving the GPU the ability to calculate large matrix operations efficiently and rapidly.
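
The sketch below, a rough illustration rather than a benchmark, shows how the same large matrix multiplication can be dispatched either to the CPU or, when one is available, to a GPU using PyTorch.

# Illustrative sketch: the same matrix multiplication on CPU and, if present, GPU.
import time
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

start = time.time()
c_cpu = a @ b                                # runs on the CPU's few cores
print(f"CPU: {time.time() - start:.3f}s")

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    start = time.time()
    c_gpu = a_gpu @ b_gpu                    # runs across thousands of GPU cores
    torch.cuda.synchronize()                 # wait for the asynchronous GPU kernel
    print(f"GPU: {time.time() - start:.3f}s")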

Instance Segmentation & Human Body Segmentation

Segmentation is the process whereby pretrained algorithms differentiate individual objects in an image, enabling AI to accurately analyse, process and interpret image inputs. The AI predicts discernible objects by combining image pixels into larger units, or superpixels, removing the need to understand each individual pixel and allowing the AI to separate that object, even with only general pairwise potentials, from the surrounding environment.
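
As a hedged illustration of the superpixel idea, the snippet below groups an image's pixels into superpixels with the SLIC algorithm from scikit-image; the sample image and segment count are arbitrary placeholders.

# Sketch: grouping an image's pixels into superpixels with SLIC (scikit-image).
from skimage import data
from skimage.segmentation import slic

image = data.astronaut()                       # sample RGB image bundled with scikit-image
superpixels = slic(image, n_segments=250, compactness=10)

# 'superpixels' labels each pixel with the id of its superpixel, so later stages
# can reason about a few hundred regions instead of hundreds of thousands of pixels.
print(superpixels.shape, superpixels.max() + 1)   # label map and approximate segment count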

Instance segmentation is the ability to understand the difference between two individual objects of the same classification in the same image; for example, two arms as part of a human body.

Human body segmentation is an example of applied instance segmentation: AI neural networks are trained with data in the form of human images, exploiting the similar anatomical structures among human bodies and enabling the parsing results of one body to be transferred to another. These estimated results enable the AI network to reliably disambiguate the key points of a human body form in a single image.
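
A minimal sketch of instance segmentation, assuming the pretrained Mask R-CNN model that ships with torchvision rather than our own networks, and a placeholder image path:

# Sketch: off-the-shelf instance segmentation with torchvision's pretrained Mask R-CNN.
import torch
from torchvision.io import read_image
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

model = maskrcnn_resnet50_fpn(weights="DEFAULT")   # pretrained on COCO (recent torchvision)
model.eval()

image = convert_image_dtype(read_image("people.jpg"), torch.float)  # placeholder path
with torch.no_grad():
    prediction = model([image])[0]

# Each detected instance comes with a class label, a confidence score and a pixel mask,
# so two people (or two arms) in the same image are kept apart as separate instances.
for label, score in zip(prediction["labels"], prediction["scores"]):
    if score > 0.8:
        print(int(label), float(score))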

Human Body Pose & Skeleton Estimation

Human Body Estimation, the modeling or prediction of a specific pose and body shape, is the configuration of bodily joints, or keypoints, within a human body in images or videos. So, given the input data of a human body as an image, 3D pose or skeleton estimation is the computing task which produces the spatial position of the depicted person, directly influencing the 3D human shape of a virtual body.

The use of pretrained algorithms makes this possible even with the inherent ambiguities of a human body frame. The deep learning process maps out the relations and dependencies of what constitutes accurate, believable human body features from human pose datasets. This is done with the estimation of a 3D human mesh, revealing the divisibility of the human form and the keypoints required for movement or accurate posing.

The use of deep learning creates pretrained algorithms that are invariant to incidental details of the image, which assists in deciphering the joints of the body. These details include, among other factors, background scenes, lighting, clothing shape and texture, skin colour and image imperfections. Keypoint annotation can occur even when parts of the body are blocked in an image, as the deep learning mapping ensures that prediction draws on real data about what a human body represents.
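
As a comparable hedged sketch for keypoint estimation, again assuming torchvision's pretrained Keypoint R-CNN as a stand-in and a placeholder image path:

# Sketch: 2D human keypoint estimation with torchvision's pretrained Keypoint R-CNN.
import torch
from torchvision.io import read_image
from torchvision.models.detection import keypointrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

model = keypointrcnn_resnet50_fpn(weights="DEFAULT")   # predicts 17 COCO body keypoints
model.eval()

image = convert_image_dtype(read_image("person.jpg"), torch.float)  # placeholder path
with torch.no_grad():
    prediction = model([image])[0]

# For each detected person, 'keypoints' holds (x, y, visibility) per joint; occluded
# joints are still predicted because the network has learned what a body looks like.
for person_keypoints, score in zip(prediction["keypoints"], prediction["scores"]):
    if score > 0.8:
        print(person_keypoints[:, :2])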

Cloth Simulation

Deep learning on DNNs also provides a tool for creating pretrained algorithms which can simulate the qualities of cloth in a virtual world. Neural networks achieve this by mimicking physically based virtual renditions of miniature cloth patches with characteristics corresponding to those of the real cloth texture. Applying this generates fast and accurate simulations of cloth under a variety of environments in a virtual setting.

The system has similarities with human body skeleton estimation, in that pretrained algorithms match input data against learned datasets to model a hierarchy of micro cloth elements, producing a virtual rendition of the material down to the individual strands. Under this method, the traits of a material, including its elasticity, weight, coarseness and strand thickness, are modeled with results that closely resemble the natural material.
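
A simplified mass-spring sketch, offered as an assumption-laden illustration rather than the learned model described above, of how a small cloth patch can be simulated as a grid of connected point masses:

# Sketch: a cloth patch as a grid of point masses connected by springs (mass-spring model).
import numpy as np

N = 20                                   # cloth resolution (N x N points)
positions = np.stack(np.meshgrid(np.linspace(0, 1, N), np.linspace(0, 1, N), indexing="ij"), -1)
positions = np.concatenate([positions, np.zeros((N, N, 1))], axis=-1)  # start flat at z = 0
velocities = np.zeros_like(positions)
rest = 1.0 / (N - 1)                     # rest length between neighbouring points
stiffness, mass, dt = 500.0, 0.01, 1e-3  # illustrative material parameters
gravity = np.array([0.0, 0.0, -9.81])

def spring_forces(pos):
    forces = np.zeros_like(pos)
    # structural springs to the right-hand and downward neighbours
    for axis in (0, 1):
        delta = np.diff(pos, axis=axis)                  # vectors between neighbours
        length = np.linalg.norm(delta, axis=-1, keepdims=True)
        f = stiffness * (length - rest) * delta / np.maximum(length, 1e-9)
        sl = [slice(None)] * 3
        sl[axis] = slice(0, -1)
        forces[tuple(sl)] += f                           # pull this point toward its neighbour
        sl[axis] = slice(1, None)
        forces[tuple(sl)] -= f                           # equal and opposite reaction
    return forces

for step in range(1000):                 # semi-implicit (symplectic) Euler integration
    acc = spring_forces(positions) / mass + gravity
    velocities += dt * acc
    velocities[0, :] = 0.0               # pin the top row of the cloth in place
    positions += dt * velocities

print(positions[-1, N // 2])             # where the bottom-centre point has sagged to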

Cloud Computing Technology

SOA Architecture

An important component of cloud computing technology is Service Oriented Architecture (SOA), which offers business customers cloud-based solutions depending on their particular requirements. The implementation and deployment of large processes over clouds requires high-speed technical computation, including the use of GPUs, to distribute those processes.

The advantages of SOA are that it is elastic and reactive to changes in customer needs, with simultaneous communication across independent networks. Constant real-time communication therefore allows rapid changes to the services rendered based on demand.

This is possible because SOA centralises high-capacity networks and high-performance computing, enabling hardware reuse and distribution across a variety of disparate business needs, models and products. Under this system, the costs and responsibilities of hardware support, development and maintenance are separated from the end user, creating large reductions in expense while increasing the productivity, efficiency and performance of network-based computing tasks.
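
As a loose illustration of the service-oriented idea, the hypothetical Flask service below exposes its capability over HTTP and can be scaled independently of any other service; the service name and endpoints are invented for the example.

# Sketch: a minimal, independently deployable service exposing its capability over HTTP.
# Service name and endpoints are hypothetical; any SOA service follows the same pattern.
from flask import Flask, jsonify, request

app = Flask("body-measurement-service")

@app.route("/health")
def health():
    # Orchestrators poll this endpoint to scale the service up or down with demand.
    return jsonify(status="ok")

@app.route("/measurements", methods=["POST"])
def measurements():
    # Another service posts scan data here and receives a structured response back,
    # without knowing anything about this service's hardware or implementation.
    scan = request.get_json()
    return jsonify(subject=scan.get("subject"), status="queued")

if __name__ == "__main__":
    app.run(port=8080)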

Cloud Virtualisation

Another vital tool for Cloud Computing Technology is Cloud Virtualisation. This element of the cloud ensures that cloud services are provided in an efficient manner. Computing capabilities are embedded into the virtual cloud, where the computer systems are presented as an abstract computing platform, functioning in a similar manner to physical computing resources.

This then operates as a means to balance the load of services, providing flexible capacity when and as it is required. Systems of Cloud Virtualisation are thus a tool enabling ready availability along with simple scalability and constant reliability.
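
A tiny sketch, purely illustrative and with hypothetical machine names, of the load-balancing idea: each new task is handed to whichever virtual machine currently carries the least load.

# Sketch: least-loaded scheduling across a pool of virtual machines (hypothetical names).
import heapq

# (current_load, vm_name) pairs kept in a min-heap so the least-loaded VM is always first.
pool = [(0, "vm-a"), (0, "vm-b"), (0, "vm-c")]
heapq.heapify(pool)

def assign(task_cost):
    load, vm = heapq.heappop(pool)            # pick the least-loaded virtual machine
    heapq.heappush(pool, (load + task_cost, vm))
    return vm

for cost in [5, 3, 8, 2, 7]:
    print(f"task of cost {cost} -> {assign(cost)}")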

Virtual Reality

Virtual Reality Physics

Virtual reality is the projection of a simulated virtual world. The high-capacity computing available today allows for simulations of real-world physics in lifelike detail. The rendition of physics from the micro scale upwards enables virtual worlds with realistic detail.

Virtual reality products provide immersive experiences that outperform what computer monitors can offer, adding more detail and realism to computer-generated virtual environments. The physics of these worlds often corresponds to that of the real physical world: shadows correspond to light sources, and relative shading or texture can be used to ascertain the depth and distance of objects. To ensure an effective virtual world, the cues used by the human brain are programmed into the physics of the world so as to enable human understanding and control within it.

Rendering

Rendering is the generation of an image from a 2D or 3D model. Transferring the data of a model from its stored form to a viewable image requires the introduction of lighting and movement from a particular viewing angle. The rendered image is the product of both a graphics pipeline and a rendering algorithm, computed with GPU assistance. As either the camera angle or the lighting changes, each 3D position is recomputed with respect to its distance from the light and its relation to the camera (viewing) position. In essence, rendering provides users with lit views of modeled data.
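
A back-of-the-envelope sketch in NumPy, standing in for a real graphics pipeline, of the two steps described above: projecting a 3D point into the camera's view and shading it against a light source.

# Sketch: projecting a 3D point to the image plane and shading it (Lambert's cosine law).
import numpy as np

point = np.array([0.5, 0.2, 4.0])          # a vertex of the model, in camera space
normal = np.array([0.0, 0.0, -1.0])        # surface normal at that vertex
light_pos = np.array([2.0, 2.0, 0.0])      # position of the light source
focal_length = 1.0

# Perspective projection: farther points land closer to the image centre.
x_image = focal_length * point[0] / point[2]
y_image = focal_length * point[1] / point[2]

# Diffuse shading: brightness depends on the angle between the surface and the light direction.
to_light = light_pos - point
to_light = to_light / np.linalg.norm(to_light)
brightness = max(0.0, float(normal @ to_light))

print(f"projected to ({x_image:.3f}, {y_image:.3f}) with brightness {brightness:.3f}")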

Augmented Reality

Augmented Reality (AR) consists of both virtual and real objects acting together simultaneously. The experience is an interactive one, in which virtual objects interact with, and are interacted with by, real physical objects. The real world is enhanced by virtually constructed, perceivable data which can be expressed across a variety of sensory mediums such as visual, auditory, haptic, somatosensory and olfactory. The natural effect of AR can be enhanced by applying physical traits, drawn from datasets, to a virtual object, creating lifelike renditions of virtual objects in the physical world.

New opportunities in smartphone technology enable augmented reality to be used through multiple devices simultaneously, creating possibilities for collaborative experiences in augmented environments.

3D Human Body Scanning

A 3D human body scan refers to taking frames of a human body in order to build a 3D virtual model of the person. The model is determined by geometric samples of the subject, which establish its colouring, shape and size.

These data points establish the depth and texture of the images as well as their distance from the 3D scanner. Data on the distances and angles between the points is collected in this manner, building a reliable prediction of the position of each point on the human body.

The scanner's gyroscope, accelerometer and movement data are utilised to locate each image and register it against other frames of the same body. However, predicting an accurate body image also requires the AI system to have knowledge of the human body's form. Deep learning of DNNs is therefore used to delineate what constitutes a human form and the interrelations between the geometric positions of body points.
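
As a hedged sketch with assumed pinhole-camera intrinsics and synthetic depth values, the snippet below shows how a scanner's depth frame can be back-projected into the 3D points discussed above.

# Sketch: back-projecting a depth frame into a 3D point cloud with pinhole intrinsics.
import numpy as np

height, width = 480, 640
fx = fy = 525.0                          # assumed focal lengths, in pixels
cx, cy = width / 2, height / 2           # assumed principal point

depth = np.full((height, width), 1.5)    # synthetic stand-in for a scanner depth frame (metres)

u, v = np.meshgrid(np.arange(width), np.arange(height))
x = (u - cx) * depth / fx                # back-project each pixel using its depth
y = (v - cy) * depth / fy
points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Frames taken from other angles would be rotated and translated into this coordinate
# system (using the scanner's gyroscope and accelerometer) before being merged.
print(points.shape)                      # (307200, 3) candidate surface points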