
AI Safety in Cyber Security | AI Decision Making | Wireheading | AI Chatbot Privacy – with Roman Yampolskiy

This episode is sponsored by the CIO Scoreboard

My guest for the most recent episode was AI expert Roman Yampolskiy. While listening to our conversation, you will fine-tune your understanding of AI from a safety perspective. Those of you who have decision-making authority in the IT Security world will appreciate Roman’s viewpoint on AI Safety.

Major Take-Aways From This Episode:

1) Wireheading, or mental illness in machines – Misaligned objectives/incentives. For example, what happens when a sales rep is told to sign more new customers but ignores profit? Now you have more customers but less profit. Or you tell your reps to sell more products and possibly forsake the long-term relationship value of the customer. There are all sorts of misaligned incentives, and Roman makes this point with AIs (see the short sketch after this list).
2) I can even draw a parallel with coaching my girls’ teams, where I have incented them to combine off each other because I want that type of behavior. This can also work against you: the team becomes really good at passing but not at scoring goals to win.
3) AI decision making: the need for AIs to be able to explain themselves and how they arrived at their decisions.
4) The IT Security implications of AI chatbots and social engineering attacks.
5) The real danger of human-level AGI (Artificial General Intelligence).
6) How will we communicate with systems that are smarter than us? We already have a hard time communicating with dogs, for example; how will this work out between AIs and humans?
7) Why you can’t wait until there is a problem to develop AI safety mechanisms. We should remember that seat belts were a good idea the day the first car was driven down the road, but they weren’t mandated until some 60 years later.
8) The difference between AI safety and cybersecurity.
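To make the misaligned-incentive point from takeaway 1 concrete, here is a minimal, hypothetical sketch (the deal names, numbers, and the pick_deals helper are invented for illustration and are not from the episode). An optimizer rewarded only for the count of new customers signed will happily trade away profit; score it on profit instead and it picks a very different set of deals.

```python
# Hypothetical reward-misspecification sketch: all data below is made up.
# Each candidate deal has a sales-effort cost and a profit if it closes.
deals = [
    {"name": "quick-win A", "close_effort": 1, "profit": 5},
    {"name": "quick-win B", "close_effort": 1, "profit": 3},
    {"name": "strategic C", "close_effort": 4, "profit": 40},
    {"name": "strategic D", "close_effort": 4, "profit": 35},
]

EFFORT_BUDGET = 8  # total sales effort available this quarter


def pick_deals(deals, score):
    """Greedily fill the effort budget, ranking deals by the given score."""
    chosen, effort = [], 0
    for deal in sorted(deals, key=score, reverse=True):
        if effort + deal["close_effort"] <= EFFORT_BUDGET:
            chosen.append(deal)
            effort += deal["close_effort"]
    return chosen


# Proxy objective: "sign as many new customers as possible" -> favor cheap deals.
proxy_picks = pick_deals(deals, score=lambda d: 1 / d["close_effort"])

# True objective: profit per unit of effort.
aligned_picks = pick_deals(deals, score=lambda d: d["profit"] / d["close_effort"])

for label, picks in [("proxy (count customers)", proxy_picks),
                     ("aligned (profit)", aligned_picks)]:
    total_profit = sum(d["profit"] for d in picks)
    print(f"{label}: {len(picks)} deals signed, {total_profit} profit")
```

With these invented numbers, the proxy objective signs more customers but earns less profit than the profit-aligned objective, which is exactly the pattern Roman warns about when an AI is given the wrong metric to optimize.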