<p class="postguide">Why Aren't People Truthful?</p>


The ‘Next Big Future’ writes “The future of work is you with a computer, not you replaced by a computer.”

Really?

While I appreciate this kind of argument, I do feel that we continue to be less than honest about what is going on. If someone’s job can be 50% automated – then that person is only needed half the time. And … if I were a corporation, where the raison d’être is to maximise shareholder value, I would also know that people are the highest cost I have.

So, if I can automate 50% of someone’s role – I have a choice …


            <p class="postguide">Why Would Forrester's CEO Do This?</p>


Here’s the link – don’t bother clicking through. It opens …

[Apple] needs [IBM’s] AI software Watson in its fight against Google, Facebook, Microsoft, and Amazon.

because apparently

Apple is in a death match …

Well, I – and many others – would disagree.

I need to remember who wrote this – it appeared in my email one day. It resonated.

Big data and machine learning (is) all oriented to one type of intelligence or a Western view of intelligence. Mimic the brain, no heart. Instinct defined via algorithm. Maybe we struggle against this because deep inside we know it’s profoundly dysfunctional.

The more I read things like this, the more I keep coming back to Jim Woessner’s Box Poem.

            <a href="http://beyondbridges.net/wp-content/uploads/2015/06/burning-bridge.jpg"><img class="aligncenter size-full wp-image-4373" src="http://beyondbridges.net/wp-content/uploads/2015/06/burning-bridge.jpg" alt="burning-bridge" width="674" height="124" /></a>It all started with a group email that I received .... ....

“I found this list of 2015 Project Grants in AI interesting, not least because of the VRM angle some of the projects might have.”

… and then the email provided a link to a pile of people who all have grants in the world of AI.

Then another person followed that link and read some of the bios and synopses – only some, I assume, since there were a lot and none of us has time to read everything. One thing she did was extract this, commenting on how worrisome it is:

Humans take great pride in being the only creatures who make moral judgments, even though their moral judgments often suffer from serious flaws. Some AI systems do generate decisions based on their consequences, but consequences are not all there is to morality. Moral judgments are also affected by rights (such as privacy), roles (such as in families), past actions (such as promises), motives and intentions, and other morally relevant features. These diverse factors have not yet been built into AI systems. Our goal is to do just that. Our team plans to combine methods from computer science, philosophy, and psychology in order to construct an AI system that is capable of making plausible moral judgments and decisions in realistic scenarios. ….

And almost by return came …

Not to worry. Their AI said this is all perfectly OK.