Yeah, great question. So that structure I just described, having many layers of these artificial neurons, allows these deep learning networks to automatically extract the most important aspects of the data that you're inputting into your model for predicting whatever outcome it is that you're trying to predict.

To use a biological visual system analogy: if you build a machine vision algorithm with a deep learning network, then your input layer will have the pixels of an image as the input, and the output layer of your neural network might be the class that that image corresponds to. So let's say you're building an image-classifying machine vision system that is designed to distinguish cats from dogs. In that case, you might have 100 images of dogs and 100 images of cats that you input, and you label all of those images as being either cats or dogs. We're setting up our deep learning model so that it can learn to associate pixels that represent a cat with the label "cat" and pixels that represent a dog with the label "dog".

The hidden layers of this machine vision network automatically learn how to extract the most important information about those pixels in order to represent a cat or a dog, or, more specifically, to distinguish a cat from a dog. The first layer of artificial neurons in this many-layered artificial neural network will come to represent very, very simple aspects of the pixels, essentially just straight lines at particular orientations. So some of the artificial neurons in that first layer will represent vertical lines, some horizontal lines, some 45-degree angles, and so on. The second layer of artificial neurons in this deep learning network can then take in that information about straight-line detection, and those straight lines can be nonlinearly recombined so that the second layer of artificial neurons can detect curves and corners. Then you can have a third layer after that, which does even more complex abstraction on the curves and corners, and so on and so on. You can have many, many such layers of artificial neurons in your deep learning network, and each one, as you move deeper, can handle more complex, more abstract representations of the input data.

And the really, really cool thing about deep learning models is that they are able to figure out what these important high-level abstract representations are fully automatically, from the training data alone. You don't need to program any of that specifically. That's what's made deep learning models so popular suddenly: as we've had the compute power and the availability of data in the last few years to train these relatively beefy models, they can then, on their own, extract all of these features from the raw data and solve all kinds of complex problems.
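[Editor's note: to make the cats-versus-dogs example concrete, here is a minimal sketch of such a layered image classifier, written as a small convolutional network in Keras, the standard way this kind of hierarchy is built in practice. Everything here is illustrative: the layer sizes, the 128x128 image dimensions, and the hypothetical data/ folder of labeled images are assumptions, not the speaker's actual setup.]

```python
# A minimal sketch of a cats-vs-dogs classifier, assuming Keras and a
# hypothetical data/ folder containing cats/ and dogs/ subfolders.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),        # input layer: raw pixel values
    layers.Conv2D(16, 3, activation="relu"),  # early neurons tend to learn edge/line detectors
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),  # next layer: recombinations such as curves, corners
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),  # deeper layer: more abstract parts of cats and dogs
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),    # output layer: probability the image is a dog
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Labeled training images, e.g. 100 cats in data/cats and 100 dogs in data/dogs:
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data", image_size=(128, 128), batch_size=32, label_mode="binary")
model.fit(train_ds, epochs=5)
```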
So, carrying that visual analogy over to my line of work, in my day job at untapt we're concerned with various models related to human resources. A really common model is predicting the fit of a given job applicant for a particular job. We have clients, big corporate clients or recruitment agencies, that handle millions of applications a year to thousands of different roles. And instead of sifting through all of those applicants with, say, a Boolean keyword search, our model can rank all of the applicants, the million applicants that you had over the last year, for any one of the roles that you're hiring for. And it does that based on the natural language of the job descriptions and the natural language of the applicants' resumes.

We've trained this up on hundreds of millions of decision data points, where a client, be that a hiring manager or a recruiter, has said: okay, based on this candidate profile and based on this job description, yes, I would like to speak to this candidate, or no, this candidate is not appropriate for this role. So we have this huge data set, and a deep learning model that's taking in the natural language from the job descriptions and the resumes at one end, and at the other end the outcome that we're trying to predict: is this person a good fit or not a good fit for the role? In the middle we have this deep learning architecture, where the earliest layers can look for very simple aspects of the natural language, and, as you move deeper and deeper into the network, we can model increasingly complex, increasingly abstract aspects of the natural language that is being used in the resumes and the job descriptions.

Because of the way that works, you could end up in a situation where two candidates who have no overlapping words whatsoever on their resumes could be the top two candidates for a given job description, because this deep learning hierarchy is able to distill, from individual words, the contextual, holistic meaning of an entire candidate profile.
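[Editor's note: as an illustration of the shape of such a model, and emphatically not untapt's actual architecture, here is a hypothetical Keras sketch: two text inputs (job description and resume) are run through a shared encoder, and a final sigmoid predicts the recruiter's yes/no decision. All names, layer sizes, and vocabulary settings are assumptions made for the example.]

```python
# Hypothetical job-fit model sketch in Keras; NOT untapt's real architecture.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE, SEQ_LEN, EMBED_DIM = 20_000, 300, 64   # illustrative sizes

# Turns raw strings into integer token sequences; must be fit on real text
# first, e.g. vectorize.adapt(all_job_and_resume_texts).
vectorize = layers.TextVectorization(
    max_tokens=VOCAB_SIZE, output_sequence_length=SEQ_LEN)

def make_encoder():
    # Shared text encoder: word embeddings, then a recurrent layer that
    # builds up increasingly contextual representations of the raw words.
    return models.Sequential([
        layers.Embedding(VOCAB_SIZE, EMBED_DIM),
        layers.Bidirectional(layers.LSTM(64)),
    ])

job_in = tf.keras.Input(shape=(1,), dtype=tf.string, name="job_description")
cv_in = tf.keras.Input(shape=(1,), dtype=tf.string, name="resume")

encoder = make_encoder()                 # one encoder, shared by both inputs
job_vec = encoder(vectorize(job_in))
cv_vec = encoder(vectorize(cv_in))

x = layers.Concatenate()([job_vec, cv_vec])
x = layers.Dense(64, activation="relu")(x)
fit = layers.Dense(1, activation="sigmoid", name="fit_probability")(x)

model = tf.keras.Model([job_in, cv_in], fit)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# Training labels: 1 if the recruiter said "yes, speak to this candidate"
# for the (job description, resume) pair, 0 otherwise.
```

[Because both texts are mapped into the same learned embedding space, two resumes with zero literal word overlap can still end up with very similar encodings, which is how the no-overlapping-words ranking described above can happen.]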