
The Day We Can Throw Bad AI in Jail Is the Day We Have Achieved True AI

Introduction

What will happen when an AI is entirely blamed for its actions? A very interesting Bloomberg piece, "Who to Sue When a Robot Loses Your Fortune," brings up important points that we will all have to contend with soon. We'll have truly entered the age of AI when we can throw that AI in jail for bad behavior.

Source: https://www.bloomberg.com/news/videos/2019-05-06/who-to-sue-when-a-robot-loses-your-fortune-video

The story itself is entertaining, so here is the 30-second low-down of what happened. Samathur Li Kin-kan, son of Hong Kong billionaire Samuel Tak Lee, who owns property all over the world, including Hong Kong and London, is suing Raffaele Costa, an Italian who has made a career of selling investments, including the one in this case: Costa got Samathur to commit $2.5 billion - $250 million of it his own money - to his AI fund at Tyndaris. Costa showed simulated backtests of the AI fund making double-digit returns, and that cinched the deal (more on why backtests flatter a strategy in the sketch below). He is now being sued for the magic sum of $23 million for allegedly exaggerating what the supercomputer could do. The "robot" in question is probably an ML model smart enough to retrain itself regularly... definitely not true AI.

Remember the trolley problem? That's when a human being, not an AI, has to decide whether to divert a trolley from its course to save five people and kill only one, or to do nothing. Either way you are screwed, because there is no good outcome. AI will contend with these types of decisions, and they won't be binary - it will face quadrillions of choices. On that day, it won't be so easy to say, "Well, it's the engineer's or the scientist's fault, not the AI's." That's unlike this article's case, which is just one human allegedly trying to rip off another - no AI mystery there. Just as a hammer maker isn't guilty if I buy that hammer and start smashing people in the head with it, right?
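Since the whole pitch hinged on simulated backtests, here is a minimal sketch of why those are so easy to flatter - my own illustration, not anything from Tyndaris, and the names (backtest_crossover, best_lb) are hypothetical. We generate a pure random walk, so there is genuinely nothing to find, then pick the moving-average lookback that happened to do best on that same history. Selection bias alone will often produce double-digit "returns."

```python
# Why simulated backtests flatter: tune a rule on the same history
# you evaluate it on and chance alone will look like skill.
import numpy as np

rng = np.random.default_rng(42)

# One year of daily log-returns from a driftless random walk:
# no rule should genuinely beat doing nothing here.
returns = rng.normal(loc=0.0, scale=0.01, size=252)
prices = 100 * np.exp(np.cumsum(returns))

def backtest_crossover(prices, lookback):
    """Long when price closes above its trailing moving average,
    flat otherwise. Returns the rule's total compounded return."""
    ma = np.convolve(prices, np.ones(lookback) / lookback, mode="valid")
    # Signal known at the close of day t is applied to day t+1's return,
    # so there is no lookahead in the rule itself.
    signal = prices[lookback - 1:-1] > ma[:-1]
    daily = np.diff(np.log(prices[lookback - 1:]))
    return float(np.exp(np.sum(daily * signal)) - 1)

# "Optimize" the lookback on the very data we score it on.
results = {lb: backtest_crossover(prices, lb) for lb in range(5, 60)}
best_lb, best_ret = max(results.items(), key=lambda kv: kv[1])
print(f"best in-sample lookback: {best_lb}, return: {best_ret:.1%}")
# On fresh data, that same lookback will usually do no better than chance.
```

Run it with different seeds and the "best" lookback changes every time - that instability is the tell that the backtest found noise, not signal.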
We may try to limit AI by not letting it make these kinds of ethical judgments. Say a fallen tree blocks the road, and the only two options to avoid it in time are to turn right and kill a child, or to turn left and kill an old man; we may add a third option for the car to sacrifice itself - yes, all occupants of the vehicle will sign a release of responsibility, and the fine print will state that this car may kill you. Just like I have to do with my kids when I take them to the trampoline place - if they get injured or die, it's on me.

The article does mention the death caused by an Uber self-driving car last year; Uber was cleared of criminal intent, meaning they didn't fault the AI. But here is the salient point: when thinking about AI and ethics, we often talk about life-and-death choices and focus on car crashes or the loss of millions of dollars. What if, instead, you had real AI capabilities in the form of a selling agent, and you told it to hit certain sales numbers - what if it started lying or manipulating just enough to get the sale and avoid being sued? When does the AI become the purchaser of that hammer?

Anybody who has worked with deep neural networks knows how opaque they are and how close to impossible it is to understand why they do what they do. A network can learn to win games by watching every pixel of a screen, then blow those pixels up through feature engineering into trillions of additional features - and it wins those games (the sketch below puts rough numbers on this)! Good luck analyzing why it did something like losing somebody's money or killing somebody. It could very well be a trolley problem where the AI saw more choices than meet the eye: perhaps if it hadn't pulled the money out, it would have caused a catastrophic stock market crash on February 14th and the firm would have lost everything, not just $20 million.
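To put rough numbers on that opacity claim, here is a back-of-the-envelope sketch. The frame sizes are the common Atari preprocessing convention (84x84 grayscale, four stacked frames) - an assumption on my part, since no specific game is named above - and the point is simply how fast engineered interaction features explode past anything a human can audit.

```python
# Back-of-the-envelope arithmetic behind the "trillions of features" claim.
# 84x84 grayscale frames stacked 4 deep is a common Atari preprocessing
# setup - an illustrative assumption, not a detail from the lawsuit.
from math import comb

pixels_per_frame = 84 * 84               # 7,056 raw pixels per frame
stacked_inputs = 4 * pixels_per_frame    # 28,224 inputs per decision

pairwise = comb(stacked_inputs, 2)       # ~398 million pairwise interactions
triples = comb(stacked_inputs, 3)        # ~3.7 trillion three-way interactions

print(f"raw inputs per decision: {stacked_inputs:,}")
print(f"pairwise interaction features: {pairwise:,}")
print(f"three-way interaction features: {triples:,}")
# Tracing one bad trade back through a feature space this large is why
# "just ask the model why it did that" is not a realistic audit strategy.
```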



