While I appreciate this kind of argument, I feel we continue not to be honest about what is going on. If someone’s job can be 50% automated, then they are only needed half-time. And if I were a corporation, where the raison d’être is to maximise shareholder value, I would also know that people costs are the highest costs I have.
So, if I can automate 50% of someone’s role, I have a choice.
I need to remember who wrote this – it appeared in my email one day. It resonated.
Big data and machine learning (are) all oriented to one type of intelligence, a western view of intelligence. Mimic the brain, no heart. Instinct defined via algorithm. Maybe we struggle against this because deep inside we know it’s profoundly dysfunctional.
Then another person followed that link and read some of the bios and synopses (I assume only some, since there were a lot, and none of us have time to read everything). One thing she did was extract this, commenting on how worrisome it is:
Humans take great pride in being the only creatures who make moral judgments, even though their moral judgments often suffer from serious flaws. Some AI systems do generate decisions based on their consequences, but consequences are not all there is to morality. Moral judgments are also affected by rights (such as privacy), roles (such as in families), past actions (such as promises), motives and intentions, and other morally relevant features. These diverse factors have not yet been built into AI systems. Our goal is to do just that. Our team plans to combine methods from computer science, philosophy, and psychology in order to construct an AI system that is capable of making plausible moral judgments and decisions in realistic scenarios. ….
And almost by return came …
Not to worry. Their AI said this is all perfectly OK.