AI & Art with Cali Rezo (6): Conclusion & Final Notes

To end this series of articles on the AI & Art project I worked on in collaboration with Cali Rezo, today I would like to share some of my thoughts on this project and on AI in general, plus some info on the tools we used and on Cali’s upcoming events.

Applying AI in the real world…

This whole project was truly a source of learning and reflection: it taught me a lot about implementing AI models and about the difference between university exercises and real-life projects. As Benjamin Brewster once said (although it is hard to know who actually said this first, and many people have been credited with this quote):

In theory there is no difference between theory and practice, while in practice there is.

What is clean data?

To be honest, you don’t really appreciate how neat and easy to work with the datasets you’re given in class are until you start working on “true” data, and this project showed me that it is already very difficult and time-consuming (for humans!) to get even “okayish” data.
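As a concrete illustration – a minimal sketch with an invented mini-dataset and hypothetical column names, nothing from this actual project – here is the kind of grunt work that real-world data typically requires before a model ever sees it:

```python
# A hedged sketch of typical pre-processing: the data and column names below
# are made up for illustration; real datasets are messier and much bigger.
import pandas as pd

df = pd.DataFrame({
    "label": ["cat", "Cat ", "dog", None, "dog"],
    "size_cm": ["12", "not measured", "30", "25", "30"],
})

df["label"] = df["label"].str.strip().str.lower()              # fix casing/whitespace
df = df.dropna(subset=["label"])                               # drop unlabeled rows
df["size_cm"] = pd.to_numeric(df["size_cm"], errors="coerce")  # turn junk into NaN
df["size_cm"] = df["size_cm"].fillna(df["size_cm"].median())   # impute missing values
df = df.drop_duplicates()                                      # remove exact duplicates

print(df)
```

Each of these lines looks trivial, but deciding which fixes are correct (is “Cat ” the same as “cat”? what should replace “not measured”?) is exactly the human, time-consuming part.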

And this pre-processing phase is often forgotten or underestimated in articles covering AI; only the people who already know the datasets presented in the research papers can get a sense of the potential of a discovery – the majority of the public is simply led to believe that “the computer took care of everything and we now have incredible results!”. Of course, not all journalists work that way; but I still feel like a lot of AI communication relies on how machine learning achieves amazing tasks all by itself, while hiding the parts of the process that heavily depend on human work.

As always, an xkcd strip is worth a million words: “Machine Learning” (from: https://xkcd.com/1838/)!

Once again, this is not nitpicking: AI models’ only way of learning is through the data they are fed. It is their whole environment, their whole world, so whatever errors or biases are in your dataset will be reproduced by your model. There is only so much you can do with your model architecture: the rest is up to your dataset!

The problem of reproducibility

Another issue that several fields of science are currently facing is the problem of reproducibility. To put it simply, reproducibility is about checking that if you do the same experiment two, three, ten times, you do get similar results that actually support your conclusions, and not a bunch of different results that are impossible to sum up.

This problem is not limited to machine learning: this article by T. Prévost (in French) relays various results from reproducibility studies showing that most domains lack the ability to reproduce crucial experiments (be it in psychology, cancer research, economics…). But it does raise lots of questions with AI, because it can be hard to tell whether the issue comes from the algorithm itself or from a human error. In truth, most of the time, it seems more likely due to humans than to the program – after all, a machine does precisely what it is asked… the problem is just that, sometimes, it is hard to ask our question well!

Due to the “black box” property of AI models, it can be difficult to understand how they come up with their predictions, and the fact that they (usually) don’t have any doubt about those is not very reassuring (see my recent article on AI models’ explainability and uncertainty for more insights on these topics). So, when you combine not-very-explainable tools with a fuzzy scientific method, results can be catastrophic in terms of reproducibility. Basically, you might end up with a providential amazing discovery that you sincerely believe is worth spreading to the world, and that everyone will be fascinated by… before they realize that they cannot actually redo your experiment and that you just got lucky.

Even worse, some scientists play with statistics and scientific methodology to boost their publications even though they do not entirely meet the necessary criteria (for example, the infamous technique of p-hacking relies on a misuse of data analysis to find statistically significant patterns that don’t truly exist). This can be partly blamed on the terrible “publish or perish” rule and on how AI research is often too quickly turned into marketable products, but it nonetheless comes from human error.
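To make the p-hacking idea concrete, here is a small simulation (a toy sketch, not taken from any real study): we compare two groups drawn from the exact same distribution many times and, by pure chance, some comparisons come out “statistically significant”:

```python
# Toy p-hacking simulation: both groups come from the SAME distribution,
# so every "significant" result below is a false positive found by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
n_tests = 100
false_positives = 0

for _ in range(n_tests):
    group_a = rng.normal(loc=0.0, scale=1.0, size=30)
    group_b = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p_value = stats.ttest_ind(group_a, group_b)
    if p_value < 0.05:               # the usual significance threshold
        false_positives += 1

# Expect around 5 "discoveries" out of 100, even though nothing is there.
print(f"{false_positives} spurious 'discoveries' out of {n_tests} tests")
```

Run enough tests and only report the “significant” ones, and you can publish a pattern that does not exist – which is exactly why clear protocols and reproduction attempts matter.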

While there is no perfect answer to this issue, as pointed out by T. Prévost in his article, establishing clear protocols, devoting time and money to reproducing experiments and comparing to official benchmarks could help spot and filter out at least part of the wrongly performed experiments.

Communicating about AI

Yesterday, I went to the exhibit “AI: More Than Human” that is currently being held at the Barbican Centre in London. Even though I appreciated the amount of work that was put into it, I was not completely convinced. In fact, I was a bit surprised by the angle they took: after some nice historical context-setting, the rest of the exhibit is more a showcase of the most impressive feats accomplished through AI than an actual explanation of how it works.

Disclaimer: the opinion given here is just the point of view my friends and I had on the exhibit; it is very personal and should not be taken at face value!

Photo at the Barbican, by Kathryn Brimblecombe-Fox (see her blog post about the title of the exhibit and our current use of semantics on topics related to new technologies: https://kathrynbrimblecombeart.blogspot.com/2019/04/)

So, during almost two hours, as I was walking around the exhibit, I actually realized I wasn’t reflecting upon what I was seeing but upon how they were sharing knowledge and communicating about AI to the public.

Yes, the exhibit does explain that AI is now everywhere in our societies and that it will definitely shape the (near) future; they are not misleadingly hiding the bad stuff and pretending that machine learning is completely awesome. Huh, as a matter of fact, now that I think about it, they do dedicate a whole part to frightening us with the terrifying uses we could find for AI. Yup, they do spend a certain amount of time repeating that robots could be our doom…

But I’m still wondering exactly what audience they are targeting since, as is often the case with AI and public communication, I feel like they are stuck between a rock and a hard place: on the one hand, they want to state the impact AI has on our lives and make us understand it is important to see where it came from and how it evolved; on the other hand, they simply try to amaze us with uncanny robots or interactive games where you talk to a chatbot – which is supposed to be one of the best in the world, apparently – or guess the next letter in a sentence and see how well a machine learning algorithm would perform on the same task. To me, it’s hard to tell whether they are addressing:

  • people who don’t know anything about AI and just want to discover it, in which case it is an intense and compact hour and a half, filled with way too many facts to see everything and, in my opinion, too little on the actual problems there currently are around data management
  • or people who are already aware of what it is and can appreciate its advantages and drawbacks, in which case you are rushed through a dense set of screens, Japanese robots and nice achievements that you already know of (like Google’s “AlphaGo” or IBM’s “Watson”), with a few tech/sci-fi movies or books you’ve probably already seen or read (for example Mary Shelley’s Frankenstein, the original Frankenstein movie, The Imitation Game, Blade Runner…) to make it more pleasant

I don’t mean to diminish the exhibit: I’m quite sure that, as a data scientist pondering the limits of the field and its potential bad consequences, I was not the best audience. But I still have this impression that they wanted to pack as many thrilling examples as possible into a somewhat small space and that their goal was to blow your mind rather than truly teach you about AI.

Also, I was kind of weirded out by the fact that all the interactive parts of the exhibit are clearly a way of having the public feed an AI model with a huge database, for free – well, actually, for more than £10. They do put a disclaimer, of course, saying that your picture will be used in a database, for the first interactive terminal… but then nothing explicitly states that when you are swiping “good” and “bad” photos on the second terminal a few meters away, you are training an AI by feeding it hundreds of thousands of new inputs every day. Nor do they mention it on the third terminal, where you sort words into categories, or the fourth, where you grade a comment to represent the emotion you felt reading it. I’m certain they put a sign somewhere… but be it because of the crowd, or because I was at an exhibit where I just wanted to relax, I did not look for it. And there was no other warning when accessing the terminals, as far as I could see. I didn’t participate in these activities myself, but for me it raises the question: did the guy I watched complete sentences and compare his answers with an AI pay £15 to work as a data scientist?

I’m not saying that communicating about AI is easy! Like in many fields, having experts popularize the important concepts is difficult, and the media are often forced to relay complex and multi-sourced info very quickly – given how fast this field is growing. But be it Google making us label data (through its partner company Figure Eight or CAPTCHAs) or Amazon “employing” anyone for a ridiculous salary to tag datasets (with its Amazon Mechanical Turk website), we need to remember that living in the era of data does not mean we should accept everything about how it’s handled – and we should know when we are actually working for the GAFAM… (as usual, the French documentary series Cash Investigation did quite an interesting job in uncovering some of those questionable workflows).

Final note on this exhibit: to be honest, we don’t seem to be the only ones who feel this way about it. Jonathan Jones from the Guardian also commented on the information overload and the issue of mixing facts and fiction so closely together – he even reveals that the last piece of the exhibit, the immersive installation “What a Loving and Beautiful World” by the teamLab art collective (see the picture below), which is quite a nice thing on its own, was not actually made with AI. So, the exhibit’s conclusion appears to be that we are afraid of those lifelike robots and uncannily human-like faces but that we can learn to make peace with AI… by not using it? Strange.

“AI: More than Human” keeps stating that we are a few years away from the future, but in truth it fails to use this technology to produce anything “better than humans”. It was interesting to see what they show of the state of AI and art – so far, we are far from ever reaching human skills with machine learning and, as Cali was saying in the interview last week, AI can copy pretty well but not invent by itself. In my opinion, the “art” produced by the AIs was not very impressive and was more a proof that we still have a long way to go before any machine can make a credible piece of artwork.

“What a Loving and Beautiful World”, by the teamLab art collective (photo by Tristan Fewings, Getty Images)

Rosemary Waugh from Time Out, too, was disappointed by her experience. Same criticisms: the exhibit doesn’t actually teach you anything because it is more interested in bragging about robots, and you end up wondering: “sure, that’s cool, but why do I care?”.

I guess we are still waiting for a really cool AI exhibit that tries to go beyond the “wow effect” and talk about machine learning in an understandable and enjoyable way…

TensorFlow: Google’s open-source tool for AI

First released in 2015, TensorFlow (TF) is a tool that Google offered the community to create, train and study AI models in depth. It is now one of AI engineers’ favorite tools. It (mainly) has a Python API (but there is also a C one and, thanks to the community, multiple other languages are now available) that provides you with a wide range of resources both for machine learning applications in production and for research in the field.

I don’t want to do a TF tutorial in this article; instead, I’d rather talk about two (technical) topics I’ve pondered while using this tool for the project and at work recently: how easy TensorFlow is to learn, and what this new TensorFlow 2.0 is about.

Plain TensorFlow or Keras?

Sometimes, people criticize TF for having quite a steep learning curve (plus, in my opinion, not the best documentation around, sadly…). The first time you dive into the tool, you may feel it’s way too much and you’re going to have a hard time putting everything together. After some searching on the net, you might stumble upon a possible alternative: Keras.

The Keras interface is a way of building complex architectures in only a few lines of code: it is a high-level API built on top of TF that abstracts away most of the fine-tuning difficulties and helps you prototype your model in a quick and easy way. For example, this interface provides a basic Sequential wrapper to automatically stack various layers into a complete model – and the layers themselves are so intuitive to use that it makes it a piece of cake to implement a given architecture in a dozen minutes.
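To give you an idea – a minimal sketch with arbitrary layer sizes and input shape, not one of this project’s actual models – here is how the Sequential wrapper stacks a small classifier:

```python
# A minimal Keras Sequential sketch: the 28x28 input shape and layer sizes
# are arbitrary choices for illustration only.
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),    # flatten 2D images to a vector
    keras.layers.Dense(128, activation="relu"),    # one hidden fully-connected layer
    keras.layers.Dense(10, activation="softmax"),  # probabilities over 10 classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # the whole architecture, defined in a handful of lines
```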

But this user-friendliness comes, of course, with a downside: Keras is not as flexible. It makes it harder to define a completely new, custom architecture from scratch. Moreover, Keras doesn’t let you use the full scope of TF features: things like threading, queuing and debugging are not available in this interface. To put it simply: TF gives you more control over your network; and more control is usually good in AI, given how confusing the constructs can be… This is why many AI engineers stick with TF itself.

For this project, I didn’t go with Keras, because I wanted full control over my models. In many cases, I drew inspiration from Siraj Raval’s GitHub repositories to implement the models. I already mentioned him in a previous post: Siraj is an AI researcher and the founder of the School of AI; I discovered his YouTube channel a few months ago and I believe his videos are a good introduction to plenty of AI and data science concepts.

From my own experience, though, mastering both Keras and TensorFlow is a plus for data science job interviews: when you are asked to draft some model architecture on a piece of paper, you can often refer to Keras’ philosophy and core structures to help get you started. And, if you have to do some technical task in a limited amount of time, Keras can be a neat way of designing some ML models quickly to get early results.

TensorFlow 2.0: A big shift in mindset!

Earlier this year, TensorFlow announced that a big update was in the starting blocks. The alpha version was released in March (along with a roadmap) and, since then, the community has been reacting to this huge change: far from bringing just some fixes or new components, this 2.0 release is truly a new start for TF with, among other things, a rethinking of some of the old tool’s core ideas.

As stated on TF’s website: “[TensorFlow 2.0] is a significant milestone and a major new release, with a focus on ease of use and simplification”. While this sounds nice, it might be a lot trickier for those who were already developing with the tool and are used to some of its features.

In particular, Google told the community that the new version would work much more tightly with Keras than the previous one. TF 2.0 also removes one of the most emblematic features of TensorFlow – the model graphs and sessions – by focusing on eager execution and changing the way variables are treated. Parts of the API will also be modified or moved.
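To illustrate the shift, here is the same addition in both styles (a hedged sketch: the two halves target different TF versions and won’t run in a single script):

```python
# --- TensorFlow 1.x (will NOT run under 2.x): build a graph, then a session ---
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=())
y = tf.placeholder(tf.float32, shape=())
total = x + y                      # nothing is computed yet, just graph-building
with tf.Session() as sess:         # the graph only runs inside a session
    print(sess.run(total, feed_dict={x: 2.0, y: 3.0}))   # -> 5.0

# --- TensorFlow 2.0: eager execution by default, no graph or session needed ---
x = tf.constant(2.0)
y = tf.constant(3.0)
print(x + y)                       # computed immediately -> tf.Tensor(5.0, ...)
```

The 2.0 style is clearly closer to plain Python, which is the whole point of the “ease of use” focus.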

Quick note: in spite of all the great things about TensorFlow, I’ve always been bugged by the fact that many updates introduced breaking changes that forced you to either manually downgrade back to an earlier TF or patch your entire codebase to follow the API changes.

Finally, Google decided to change the way they handle open-source contributions: after seeing how the tf.contrib repository evolved into a powerful but hard-to-organize experimental playground, they now prefer to close this direct access and instead have a dedicated team (a “special interest group”) handle the migration of old contributions and the addition of new ones in a specific thread. To me, this is good news and might solve some of the documentation issues that were slowly arising from having so many entry points for new features.

Even if I haven’t yet had the opportunity to compare TensorFlow 1.x and 2.0 in depth, there is no doubt that this new version has some interesting ideas and is worth looking at if you’re working in this field… but it will mean learning some new habits, too!

Cali’s news and upcoming events

If you want to check out more of Cali’s artwork, you can go to her website. She is also represented by three gallerists across Europe.

On the last Saturday of June, the 29th, Cali will open her studio in Paris from 2pm: don’t hesitate to drop by to admire her gorgeous abstract paintings in their natural environment!

References
  1. Cali Rezo’s website: http://www.calirezo.com/site2015/
  2. TensorFlow’s website: https://www.tensorflow.org/
  3. TensorFlow 2.0 Roadmap: https://www.tensorflow.org/community/roadmap
  4. Keras’ documentation: https://keras.io/
  5. The School of AI’s website: https://www.theschool.ai/
  6. Siraj Raval’s YouTube channel and GitHub profile: https://www.youtube.com/channel/UCWN3xxRkmTPmbKwht9FuE5A, https://github.com/llSourcell
  7. “AI: More than Human” exhibit at the Barbican Centre (London): https://www.barbican.org.uk/whats-on/2019/event/ai-more-than-human
  8. Google’s “AlphaGo”: https://deepmind.com/research/alphago/
  9. IBM’s “Watson”: https://www.ibm.com/watson
  10. teamLab art collective’s website: https://www.teamlab.art/
  11. Figure Eight’s website: https://www.figure-eight.com/
  12. Amazon Mechanical Turk’s website: https://www.mturk.com/
  13. Cash Investigation documentary: https://www.france.tv/france-2/cash-investigation/1066737-au-secours-mon-patron-est-un-algorithme.html
  14. T. Prévost, “Intelligence artificielle, reproductibilité et «boîte noire»: un chaos scientifique” (https://korii.slate.fr/tech/intelligence-artificielle-machine-learning-crise-reproductibilite-boite-noire-science), May 2019. [Online; last access 26-May-2019].
  15. Wikimedia Foundation, “Data dredging” (https://en.wikipedia.org/wiki/Data_dredging), May 2019. [Online; last access 26-May-2019].
  16. J. Jones, “‘I’ve seen more self-aware ants!’ AI: More Than Human – review” (https://www.theguardian.com/artanddesign/2019/may/15/ai-more-than-human-review-barbican-artificial-intelligence), May 2019. [Online; last access 27-May-2019].
  17. R. Waugh, “AI: More Than Human review” (https://www.timeout.com/london/museums/ai-more-than-human-review), May 2019. [Online; last access 27-May-2019].
  18. A. Nain, “TensorFlow or Keras? Which one should I learn?” (https://medium.com/implodinggradients/tensorflow-or-keras-which-one-should-i-learn-5dd7fa3f9ca0), May 2017. [Online; last access 23-May-2019].
  19. TensorFlow, “What’s coming in TensorFlow 2.0” (https://medium.com/tensorflow/whats-coming-in-tensorflow-2-0-d3663832e9b8), January 2019. [Online; last access 23-May-2019].
  20. Frankenstein (1818) written by Mary Shelley: https://en.wikipedia.org/wiki/Frankenstein
  21. Frankenstein (1931) directed by James Whale: https://en.wikipedia.org/wiki/Frankenstein_(1931_film)
  22. The Imitation Game (2014) directed by Morten Tyldum: https://en.wikipedia.org/wiki/The_Imitation_Game
  23. Blade Runner (1982) directed by Ridley Scott: https://en.wikipedia.org/wiki/Blade_Runner
