AI and Machine Learning Implementation Workshop #suglobalsummit

Neil Jacobstein gave a workshop on AI and Machine Learning Implementation at the 2019 Singularity Summit.

It turns out there is a fatal flaw in most companies' approaches to machine learning, the analytical tool of the future: 85% of projects never get past the experimental phase and so never make it to production (Forbes, Enrique Dans, Jul 21, 2019).

Run real projects; do not run pilot projects that cannot fail. Real experiments can fail, and that is precisely why they teach you something.

Not all problems can be solved with data.

AI Implementation Recommendations

1. Start with a problem, not the technology, and establish the business case for solving it
2. Determine the roles for AI and humans: AI alone, augmentation, or hybrid
3. Identify the data required, as well as acquisition and maintenance plans
4. Select the appropriate machine learning platform
5. Select a hardware schema that scales – mobile to cloud
6. Test with real data from users and keep evolving test cases as things change
7. Design simple interfaces – minimize changes to behavior or existing workflow
8. Develop performance metrics for the problem and for the business
9. Design in AI safeguards, system security, and exception handling
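Recommendation 8 is worth making concrete: evaluate a model on both a model-level metric and a business-level metric before it goes to production. The sketch below is a minimal illustration, not part of the workshop material; the labels, predictions, and dollar costs are hypothetical, and the error costs (`cost_fp`, `cost_fn`) are assumptions you would replace with real business figures.

```python
# Sketch of recommendation 8: track a model metric (precision/recall)
# alongside a business metric (cost of errors) for the same predictions.
# All data and costs here are made up for illustration.

def precision_recall(y_true, y_pred):
    """Model-level metric: precision and recall for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

def business_cost(y_true, y_pred, cost_fp=5.0, cost_fn=50.0):
    """Business-level metric: hypothetical costs — a false alarm
    costs $5, a missed case costs $50."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return fp * cost_fp + fn * cost_fn

# Hypothetical evaluation set: true labels vs. model predictions.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]

p, r = precision_recall(y_true, y_pred)   # → (0.75, 0.75)
cost = business_cost(y_true, y_pred)      # → 55.0 dollars
```

The point of pairing the two metrics is that they can diverge: a model with identical precision can carry very different business cost depending on whether its errors are false alarms or misses, which is exactly the gap between "the model works" and "the project works."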

7 thoughts on “AI and Machine Learning Implementation Workshop #suglobalsummit”

  2. Two answers spring to mind, which aren’t mutually exclusive:

    1. They were too optimistic with their guess as to what was required to optimise such a problem space.
    2. They were using a lesser value of “optimise”.
  3. A short story…

    35 years ago, I was selling high-end PCs to profs, grad students, and post-docs. Had a reputation for never cheating or overselling.

    The UCBerkeley A.I. group took a shine to us. Unlike the others, they had at-the-time-outlandish requirements. 10× the memory. First adopters of multiple processors in a single enclosure. Huge custom-made rotating disk caches. Page swap memory well-and-above the limits of MSDOS or Unix (at the time). BSD UNIX, booted from custom-made 9-track full-sized tape controllers and ¾ inch tape.

    They and I had many discussions about what AI was going to need in memory, bandwidth, processing power, special as-yet-uninvented coprocessors, disk arrays…

    Basically, “finding the optimization for 100 manufacturing plants, 10,000 customers, with varying-over-week, month, season, and gaussian-lifetime of product mix” was considered amongst the hardest, depending more on AI than on strict combinatorial work.

    They agreed, “100 to 10,000 times the CPU, a few gigabytes of RAM, a few hundred gigabytes of disk space, and solid power supplies, adequate cooling”.  

    Seemed awesome.

    Now that fits on my desktop, for $3000.
    Still … the same problems are given to be outside my shell’s abilities. 

    I wonder why?
    Just saying,
    GoatGuy ✓

