
How Big Data Will Make The World Unrecognizable, Part 10

In part two of the series, we talk about how IoT devices can predict the future and combine with the cloud to “learn” using artificial intelligence.

If you missed it, you need to watch part one first.


Son of a ####, are you kidding me?!  Math?  OK, this is a simple example, but it proves that with the right amount and types of data, we can predict the future.  This camera knows a few things about its surroundings.  It knows the height at which it is installed and its geographical location.  It knows its orientation in space.  Using a little trigonometry, the camera can determine the geographical location, direction, and speed of the objects it can see.
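To make that trigonometry concrete, here is a minimal sketch of the kind of arithmetic a camera could do, assuming a flat parking lot and a camera that knows its own mounting height and downward tilt. Every number in it is invented for illustration, not taken from any real device.

```python
import math

# A minimal sketch (hypothetical values) of how a camera that knows its own
# mounting height and tilt could estimate where an object sits on the ground
# and how fast it is approaching.

CAMERA_HEIGHT_M = 6.0   # mounted 6 m above the lot (assumption)

def ground_distance(angle_below_horizon_deg):
    """Horizontal distance to the point where the object touches the ground."""
    angle = math.radians(angle_below_horizon_deg)
    return CAMERA_HEIGHT_M / math.tan(angle)

# Two observations of the same object, one second apart.
d1 = ground_distance(20.0)   # ~16.5 m away
d2 = ground_distance(24.0)   # ~13.5 m away
speed_m_s = (d1 - d2) / 1.0  # closing speed toward the camera

print(f"distance now: {d2:.1f} m, closing at {speed_m_s:.1f} m/s")
```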

For our example of predicting the future, we will assume that a nice blue-haired senior starts driving towards the building and at some point mistakes the accelerator for the brake pedal.  Now grandma is no ordinary grandma; she’s driving a top fuel dragster that can accelerate very quickly.  Grandma and her dragster rocket towards the building.  If you know the limits on how quickly a car can stop and how sharply it can corner, at some point it becomes mathematically impossible for the car to avoid hitting the building.  Top fuel drag cars can accelerate very quickly, but they can’t stop as fast as they accelerate, and let’s say that these cameras have an algorithm with an average value for how quickly a car can decelerate or turn.  In our example, we might know seconds before the car hits the building that it is not possible for the car to stop or avoid the collision.  Through just a few data points, we’ve predicted the future.  The camera could have an ambulance on the way BEFORE the event happens.
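None of the math here is exotic. A rough version of the “can this car still stop?” check could look like the sketch below, which uses the textbook stopping-distance formula d = v²/(2a); the deceleration figure, reaction time, and distances are assumptions for the example, not values from any real system.

```python
# Back-of-the-envelope "can this car still stop?" check.
# All numbers are assumptions chosen for illustration.

BRAKING_DECEL_M_S2 = 7.0   # assumed average hard-braking deceleration

def can_stop_in_time(speed_m_s, distance_to_building_m,
                     reaction_time_s=1.0, decel=BRAKING_DECEL_M_S2):
    """True if the car can still come to rest before reaching the building."""
    reaction_distance = speed_m_s * reaction_time_s
    braking_distance = speed_m_s ** 2 / (2 * decel)   # d = v^2 / (2a)
    return reaction_distance + braking_distance <= distance_to_building_m

# Grandma's dragster, 30 m from the building and still accelerating:
for speed in (10, 20, 30):   # metres per second
    verdict = "can stop" if can_stop_in_time(speed, 30.0) else "impact unavoidable"
    print(f"{speed} m/s -> {verdict}")
```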

My point in mentioning this example is so you understand that, given enough data, many different types of events can be predicted.  In our retail store that offers Cheep Chicken Mondays, the camera overlooking the parking lot, converged with the cloud, can begin predicting who is coming in for chicken.  A car pulls into the lot and matches the type that typically buys chicken.  It parks somewhere in view of the sign, and the camera determines that the person stepping out of the car is a middle-aged male who is hungry.  The person enters the store and walks towards the section with the chicken.  From the moment this car pulls in, the probability of selling chicken increases as more data is collected.  As more data is collected over time, predicting next week’s chicken sales becomes possible.  Things get VERY interesting when that camera isn’t the one on the retail establishment that sells the Cheep Chicken, but the one on the hardware store across town; more on that later.
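To make the “probability rises with every new data point” idea concrete, here is a toy sketch that folds each observed signal into a running estimate of a sale. The baseline rate and the signal weights are invented for the example; a real system would learn them from historical data.

```python
import math

# Toy illustration: each new observation nudges the estimated probability of
# a chicken sale. Weights and baseline are invented for this example.

BASE_RATE = 0.05  # assumed baseline: 5% of Monday visitors buy chicken

SIGNAL_WEIGHTS = {                      # log-odds adjustments (assumptions)
    "car_type_matches_buyer_profile": 0.8,
    "parked_in_view_of_sign": 0.4,
    "middle_aged_male": 0.3,
    "walked_toward_chicken_aisle": 1.5,
}

def update_probability(signals, base_rate=BASE_RATE):
    """Fold the observed signals into the baseline probability via log-odds."""
    log_odds = math.log(base_rate / (1 - base_rate))
    for name in signals:
        log_odds += SIGNAL_WEIGHTS[name]
    return 1 / (1 + math.exp(-log_odds))

observed = []
for signal in SIGNAL_WEIGHTS:           # signals arrive one at a time
    observed.append(signal)
    print(f"after '{signal}': p(sale) = {update_probability(observed):.2f}")
```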

Shifting gears, I want you to take another look at the view from our original security camera example.  This time I want you to look at this picture in a different light.  No longer think of it as a security camera.  Look at this picture from a business perspective and think whatever you’d like; use whatever information you can gather from this image.  For example, a developer may see that even though this area gets high traffic, a certain section of the parking lot never gets used and might be a prime location for the next Applebee’s.  A local car dealer may notice that silver is the most popular car color for this geographic area and stock more silver cars.  The owner of the mall might use car counts to entice a new tenant, and a criminal with fraudulent access to the data notices that certain cars are unattended for hours at a time every Monday night.  The possibilities for actionable intelligence are virtually limitless if you can look at the data in the right manner.  And this poses a few issues.
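To illustrate how the same raw detections can answer very different business questions, here is a small sketch with made-up detection records. The developer, the car dealer, and the mall owner are each just running a different query over the same data.

```python
from collections import Counter

# Hypothetical detection records produced by the camera over a few days.
detections = [
    {"type": "car", "color": "silver", "stall": "A12", "weekday": "Mon"},
    {"type": "car", "color": "silver", "stall": "A14", "weekday": "Mon"},
    {"type": "car", "color": "blue",   "stall": "B03", "weekday": "Tue"},
    {"type": "car", "color": "silver", "stall": "A12", "weekday": "Wed"},
]

# The car dealer's question: which color is most common around here?
color_counts = Counter(d["color"] for d in detections if d["type"] == "car")
print("most common color:", color_counts.most_common(1))

# The developer's question: which stalls never get used?
all_stalls = {f"{row}{n:02d}" for row in "AB" for n in range(1, 21)}
used_stalls = {d["stall"] for d in detections}
print("never used:", len(all_stalls - used_stalls), "of", len(all_stalls), "stalls")
```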

In order to predict the future, and pay attention because the next statement is a bit more profound, in order to manipulate the future, we have to be able to understand the data we are looking at.  Our security camera generates data that originates in the form of an image, much like the human eye sees things.  When we see another human being, our brain uses information conveyed from our eyes to recognize that what we are seeing is a person.  The object we see has a certain aspect ratio; it has two arms and two legs, a head at the top with two eyes, a nose, and a mouth.  A car has a certain size and four wheels, or more accurately two, because of the perspective.  For a computer to determine these things, it has to be taught what a car or a human looks like.  In the simplest terms, a human being could write an algorithm that looks for these attributes in an image, but can you imagine how much effort would have to go into writing algorithms to recognize everything you determined by looking at the security camera footage?
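For a sense of what that hand-written approach looks like, here is a deliberately crude rule-based classifier. The thresholds are invented; the point is how quickly this style of coding becomes unmanageable once you want to recognize more than a couple of object types.

```python
# A deliberately naive, hand-written classifier based on a few image
# attributes. Thresholds are invented for illustration only.

def classify_blob(width_px, height_px, wheel_like_shapes, limb_like_shapes):
    aspect = height_px / width_px
    if aspect > 2.0 and limb_like_shapes >= 4:
        return "person"        # tall and narrow, arms and legs visible
    if aspect < 0.8 and wheel_like_shapes >= 2:
        return "car"           # wide and low, at least two visible wheels
    return "unknown"           # everything else needs yet another rule...

print(classify_blob(40, 110, 0, 4))    # -> person
print(classify_blob(220, 90, 2, 0))    # -> car
print(classify_blob(60, 45, 0, 4))     # -> unknown (the cat, as it happens)
```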

The answer is to teach the device to learn how to write its own algorithms, or what could be called artificial intelligence or machine learning.  Instead of a person interpreting the way a camera sees a person, you write an algorithm that teaches the camera to observe different aspects of what it is seeing and look for patterns in that data.

Our security camera spends its time watching cars and people come into the parking lot.  Most of the time, that is all it sees.  But one day, a little kitty cat walks into the field of view.  It is made up of fewer pixels than a human, has a different aspect ratio because it walks on all fours, is a different color, and so on.  The algorithms in the camera say, “I’ve seen something, I don’t know what it is, but here is the data that I collected from analyzing this group of pixels,” and the camera remembers the unique pattern that this kitty cat represents.  The next day, the same kitty walks into the parking lot again.  The algorithms notice that the patterns match the object seen the day before, and it can be deduced with a high degree of probability that this is the same object.  Remember, the camera has limited computing power.  It doesn’t know what a cat is, and frankly, it probably doesn’t need to know, as it sends the collected information about the newfound object to the cloud.  The cloud, as mentioned, has immense computing power AND a connection to a database of all the images on the internet.  The cloud is able to compare these images to the one captured by the camera, determine that it is an orange tabby approximately two years old, and send that determination back to the camera, and every other camera connected to the system; from now on, those patterns are known as “CAT: orange tabby”.  Furthermore, the cloud also sends the camera known information about cats and their behaviors as it relates to what the camera can see.  One of those bits of information happens to be the maximum velocity a cat can travel.  Why?  Because that is one of the factors a camera can observe, as we mentioned earlier.
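The division of labor between the camera and the cloud might be sketched roughly like this. The class and method names are hypothetical and the feature vectors, similarity threshold, and cat facts are placeholders, but the flow is the one described above: match locally, escalate unknowns, and remember whatever label and facts come back.

```python
import math

class Cloud:
    """Stands in for the image database and heavy compute on the back end."""
    def identify(self, features):
        label = "CAT: orange tabby"      # looked up against the image corpus
        facts = {"max_speed_m_s": 13.4}  # roughly a 30 mph sprint (assumption)
        return label, facts

class EdgeCamera:
    def __init__(self, cloud):
        self.cloud = cloud
        self.known_patterns = {}   # label -> feature vector
        self.known_facts = {}      # label -> facts pushed down from the cloud

    def observe(self, features):
        """Match against locally known patterns; escalate anything unfamiliar."""
        for label, known in self.known_patterns.items():
            if self._similar(features, known):
                return label                          # recognized on the edge
        label, facts = self.cloud.identify(features)  # heavy lifting in the cloud
        self.known_patterns[label] = features         # remember it from now on
        self.known_facts[label] = facts
        return label

    @staticmethod
    def _similar(a, b, tolerance=0.1):
        return math.dist(a, b) < tolerance

camera = EdgeCamera(Cloud())
print(camera.observe((0.75, 0.2, 0.9)))   # day one: escalated, then remembered
print(camera.observe((0.76, 0.2, 0.9)))   # day two: recognized locally
```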

The cat shows up the third day, the camera recognizes it as “CAT: orange tabby” and sends that information to the cloud.  Algorithms in the cloud have determined that a cat is something that shouldn’t be in the parking lot and send a message to animal control.  Right at this moment, grandma is pulling out of a parking space in her top fuel dragster and mistakes the gas for the brake and, you may have guessed it, the camera determines the cat is done for, sends that info to the cloud, and the cloud amends the message to animal control with “bring a shovel”, all before the event actually occurs.

Predicting the future using current and historical data is a very interesting topic.  I’ve shown you two occurrences now, one of which involves a machine providing itself with the data that was necessary to predict the future.  In reality, all of this is done with a certain degree of probability; nothing is certain.  The machines predicting the future have a finite number of parameters to work with.  Up until the moment the cat was dead, it was possible for any number of unknown events to occur that would have avoided the death of the cat.  A small plane could have fallen out of the sky and crashed into the side of grandma’s dragster after the system determined the cat would be hit.  This plane could have imparted enough force on the dragster to push it out of the way, and the cat’s maximum velocity running away from the dragster might have been enough to save its life.  This event has an ever so minuscule probability of happening, but it could.  The point I want you to take away is that these predictions, be it Cheep Chicken or dead cats, are all probability-based, and these predictions can only be made from available data.

At this point we are well into this series and we haven’t yet stated why ALL of this exists.  To state the obvious, these systems exist to effectively alter human behavior to make money, period.  You have to keep in mind that there are both positive and negative influences on making money.  The system knows that putting out a sign for Cheep Chicken will positively affect sales by causing a human being who was stopping in the plaza to go to a different store to stop in the grocery store for some Cheep Chicken instead.  The system also knows that certain negative occurrences, like dead cats and out-of-control dragsters, can deter people from coming altogether and negatively affect sales.

So on day four, when the system recognizes grandma and her dragster, part of the information that is determined is the probability that grandma will have a negative effect on overall sales at the store.  The fact that grandma is assigned a negative factor is consequential, but everyone is assigned a value reflecting both their negative and positive effects on sales.  When another cat appears in the parking lot at the same time, the probability of a negative event occurring rises.
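A toy version of that running risk ledger might look like the sketch below. Every probability in it is invented, but it shows how each tracked entity carries its own estimated effect on sales and how co-occurring risks, grandma plus cat, compound into a higher chance of a negative event.

```python
# Toy "risk ledger": every tracked visitor carries estimated positive and
# negative effects on sales. All numbers are invented for illustration.

visitors = {
    "grandma_dragster":   {"p_purchase": 0.60, "p_incident": 0.20},
    "hungry_chicken_guy": {"p_purchase": 0.85, "p_incident": 0.01},
}

def incident_probability(present, cat_in_lot=False):
    """Probability that at least one negative event occurs right now."""
    p_no_incident = 1.0
    for name in present:
        p = visitors[name]["p_incident"]
        if cat_in_lot and name == "grandma_dragster":
            p = min(1.0, p * 2)   # yesterday's near miss doubles the estimate
        p_no_incident *= (1 - p)
    return 1 - p_no_incident

everyone = list(visitors)
print(f"no cat:  {incident_probability(everyone):.2f}")                 # ~0.21
print(f"cat too: {incident_probability(everyone, cat_in_lot=True):.2f}") # ~0.41
```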

Up until this point, we’ve talked essentially about data flowing into the system.  Now we will talk about how this system might use outputs to manipulate the future.

The fact that granny is in the store and the cat is in the parking lot increases the probability that a negative event might occur, as it did yesterday; that is past data.  The system’s sole purpose is to positively affect sales, and the probability of this event is just one of thousands of possible events the system is watching in this store at the moment.  The system has many options available, each with a consequence and a probability of changing the course of future events, and each of these actions has a further effect on other events happening.  In an extreme case last year, a gunfight erupted in the parking lot, and the system locked the doors and used the store’s intercom to warn shoppers away from the front of the store.  There were no sales for the next 45 minutes.  That action would be too much here; after all, it’s only a cat.  Doing nothing would not change the probability of grandma killing the cat.  The system decides during checkout to give grandma a $1-off coupon for Cheep Chicken.  The system alerted animal control the moment the cat was identified and they are on their way, but it will be another three minutes.  Grandma looks at the coupon and heads to the Cheep Chicken.  It takes her four minutes to get back to her car.  The system has manipulated the future.
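The coupon decision is, at bottom, an expected-value calculation. Here is a heavily simplified sketch of how the system might score its options; all of the numbers are assumptions chosen to mirror the story above, and in this toy version the $1 coupon comes out as the cheapest way to change the future.

```python
# Score each available action by its expected effect on sales, given how
# likely it is to prevent the negative event. All numbers are assumptions.

actions = {
    "lock_doors_and_warn": {"p_prevents_event": 0.99, "sales_cost": -4500.0},
    "do_nothing":          {"p_prevents_event": 0.00, "sales_cost": 0.0},
    "chicken_coupon":      {"p_prevents_event": 0.80, "sales_cost": -1.0},
}

P_EVENT = 0.40                 # chance grandma flattens the cat if nothing changes
EVENT_SALES_IMPACT = -300.0    # estimated lost sales if shoppers see it happen

def expected_value(action):
    spec = actions[action]
    p_event_after = P_EVENT * (1 - spec["p_prevents_event"])
    return spec["sales_cost"] + p_event_after * EVENT_SALES_IMPACT

for name in actions:
    print(f"{name:22s} expected impact: {expected_value(name):9.2f}")

print("chosen action:", max(actions, key=expected_value))   # the $1 coupon wins
```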

Up until this point we’ve talked about how big data, the internet of things, the cloud, artificial intelligence, and fast data can affect the future in just one grocery store.  But there is a whole world around this store, and in reality, the data being collected and the actions being taken aren’t happening in just this store, but across the world.  In the next part, we broaden the scope and take a look at these implications as well as the economic, societal, and political ramifications.
