Promise made, promise kept. Here’s the novel technique that deduces the depth of each object in an image. The interesting part is that we applied the same natural principle the human eye uses: reverse-engineering binocular vision, we use triangulation to determine each object’s distance from the camera, which is what you see in the picture below.
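As a rough sketch of the triangulation idea (the focal length and baseline values here are hypothetical, not our calibration): with two cameras separated by a known baseline, an object’s distance can be recovered from its disparity, the horizontal shift of the object between the two views.

```python
# Stereo triangulation sketch: depth = focal_length * baseline / disparity.
# focal_px (pixels) and baseline_m (meters) are made-up example values.
def depth_from_disparity(disparity_px, focal_px=700.0, baseline_m=0.12):
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# An object shifted 70 px between the left and right images:
d = depth_from_disparity(70)  # ~1.2 meters away
```

The closer the object, the larger the shift between the two views, exactly as with your own two eyes.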
The advantage of depth perception is the accuracy it gives: you can instantaneously tell the position of different objects in a scene. We’ll now be applying our own AI Powershift to this part of the software, optimizing our hardware use while producing more results.
In this update we improved the accuracy and quality of object detection by changing the core engine. We also combined our software with temperature detection for thermal infrared cameras. This feature adds to the capabilities of the “All-seeing” AI.
In the next few weeks, we will be talking about a novel technique that incorporates depth recognition. This feature could bring price reductions to the auto-manufacturing industry’s perception technology, for example.
Welcome to another update, where we want to show you this video. In it, we try to point out the accuracy and quality of our engine at detecting objects, compared to other engines in the works. This is only a sample; more objects will be detected over time for a more complete result, giving a higher number of objects detected overall.
Next, we’ll be developing a more accurate system.
First off, we would like to welcome everyone to the new year! 2020 has been full of challenges, and 2021 doesn’t feel like giving anyone a break, so we’ll push through with everything we’ve got.
Secondly, we’ve been doing just that: working through the holidays. We’ve made surprisingly good progress! We’re currently integrating and optimizing core functionality that vectorizes each frame of a live video from the start. This object recognition, and the association of objects between frames, will be done at a much higher level of accuracy, as you can see by comparing the two pictures below.
The next step consists of testing a live feed and adding some context. The next time you hear from us, we’ll post a video to demonstrate the difference.
Last update we started working on “integrating regressions to trace back objects and to predict their positions.” Although it’s relatively easy to operationalize, it’s another challenge to pick the right algorithm for the right task.
To recap: regression models the relationship between variables, and the model is iteratively refined using a measure of error in its predictions.
Regression methods are a workhorse of statistics and have been co-opted into statistical machine learning. This can be confusing because “regression” designates both a class of problem and a class of algorithm. Regression, then, is a process, and in our case one with objectives unlike anything we’ve tackled before.
Here’s a look at some of the most popular regression algorithms under consideration:
– Ordinary Least Squares Regression (OLSR)
– Linear Regression
– Logistic Regression
– Stepwise Regression
– Multivariate Adaptive Regression Splines (MARS)
– Locally Estimated Scatterplot Smoothing (LOESS)
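As an illustration of the simplest entry in this list, here is a minimal ordinary-least-squares fit in NumPy (the data is synthetic, generated only for this sketch):

```python
import numpy as np

# Ordinary least squares: find weights minimizing squared prediction error.
def ols_fit(x, y):
    X = np.column_stack([np.ones(len(x)), x])  # add an intercept column
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w  # [intercept, slope]

# Synthetic data close to the line y = 2x + 1
rng = np.random.default_rng(0)
x = np.arange(10, dtype=float)
y = 2 * x + 1 + rng.normal(0, 0.01, size=10)

w = ols_fit(x, y)  # recovers approximately [1.0, 2.0]
```

The same fit/predict loop generalizes to the fancier methods in the list above; only the model family and the error measure change.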
Welcome to the 6th update
Here’s a demonstration of the state of Artificial Narrow Intelligence (A.N.I.) in 2020, using the popular OpenCV together with TensorFlow. We are planning to integrate regressions to trace back objects and to predict their positions. We would also like the public to be aware that we are targeting a new paradigm called Artificial Collective Intelligence (A.C.I.). It will complement the existing notions of Artificial General Intelligence (A.G.I.) and Artificial Super Intelligence (A.S.I.).
Nov 1st – Nov 15th
We’re now implementing movement detection by linking objects across consecutive frames of a video. This will allow us to extrapolate an object’s position at any point in time.
Stay tuned for a video on the latest step from Philippe Bouchard on November 15th!
Oct 19th – Nov 1st
With the skeleton built, we can now implement the inference of particular instances through the vectorization of static images.
In other words, we’ll be able to deduce the object types present in an image by converting pixel-based images into vector-based versions, with every facet treated as a line or shape (see the image below: the right side shows the pixel-based image, the left the vector-based version).
You might think this only applies to images, but it also covers any sort of text, symbols, and other tiny details. So it doesn’t only apply to object recognition.
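Here’s a toy sketch of the pixel-to-vector idea (not our actual engine): every boundary between a filled pixel and an empty one becomes a line segment, turning a raster shape into a list of vectors that downstream logic can reason about.

```python
import numpy as np

# Convert a binary (pixel-based) image into vector line segments by emitting
# an edge wherever a filled pixel borders an empty one or the image border.
def vectorize(img):
    segments = []
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            if not img[y, x]:
                continue
            if y == 0 or not img[y - 1, x]:          # top edge
                segments.append(((x, y), (x + 1, y)))
            if y == h - 1 or not img[y + 1, x]:      # bottom edge
                segments.append(((x, y + 1), (x + 1, y + 1)))
            if x == 0 or not img[y, x - 1]:          # left edge
                segments.append(((x, y), (x, y + 1)))
            if x == w - 1 or not img[y, x + 1]:      # right edge
                segments.append(((x + 1, y), (x, y + 1)[::-1]))
    return segments

# A 2x2 filled square inside a 4x4 image vectorizes to its 8 perimeter edges.
square = np.zeros((4, 4), dtype=bool)
square[1:3, 1:3] = True
segs = vectorize(square)  # 8 unit-length segments
```

A production vectorizer would then merge collinear segments into longer lines and curves; this sketch stops at the raw pixel boundary.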
Oct 4th – Oct 18th
Now that we have that 5-year-old capable of understanding mathematical language, we can start working on deduction: its capability to extrapolate different analyses from various data.
Contrary to most models, which are very specific to handling one task in particular, it will be able to take a general approach to various problems simultaneously, to get the bigger picture.
In fact, multiple industries and sectors could benefit from such a model: economics (predictive models), stock markets, the medical field, gaming, fashion trends, astrophysics, science, accounting, architecture (drawings & concepts), and more…
The skeleton of the A.S.I. is in development!
The next 2 weeks will be dedicated to integrating the mathematical language, so it can achieve the same level of comprehension as a 5-year-old child.
Welcome to milestone 1!
We present the AI Powershift, which will be used to speed up your artificial-intelligence algorithms and interpreted languages up to 10x, using this innovative translator with your favorite compiler. The translator is generic, so it works with all libraries and the most complex algorithms.
This software will be used in our A.S.I. to get the most out of the hardware, accelerating the learning curve.