Can we really support the use of AI?

Daniel Sønstevold has been thinking about safety, transparency, solidarity, quality and sustainability in connection with AI, and asks in this blog post: Can we really support the use of AI? Sønstevold is an M365 Strategist and Generative AI Evangelist and consultant at EVIDI.

Daniel Sønstevold

Daniel Sønstevold is an M365 Strategist and Generative AI Evangelist and consultant at EVIDI.

I, like many others, am busy drumming Copilot and AI into the ears of anyone willing to listen. But how much thought do we give to what it actually means to start using these services?

Principles and quality

Microsoft has its six principles for responsible AI, but those principles are mostly about development and implementation, not usage. AI can help me draft the project report, the blog post and the sales pitch, but is it okay for me to use it? I would say yes, but it is about time we established some rules of the road.

The media houses have already started, and seem diligent about disclosing when articles are written or summarized by AI and only quality-assured by a human eye.

Microsoft has published a separate Copilot Copyright Commitment, so for my part the media stories about writers losing income from their works do not apply to this product. The topic interests me all the same, even though I choose to trust Microsoft here.

How long will it remain a mark of quality that a text is “100% human-made”, and to what types of texts will that apply?

Topics that deserve focus

In any case, several colleagues and I have given some thought to topics that deserve focus in the time ahead. My thoughts revolve around safety, transparency, solidarity, quality and sustainability. Above all, the most important thing for me is awareness of these topics and the connections between them.

  • Think about what you put into which AI service. Customer names, assignment details, project information and real figures, alone or in combination, are examples of information that must not go astray. Know the security built into the services you use. If you don't, leave them alone.
  • Becoming more efficient is not about selling something machine-made as if it were handmade. Let the fast-food burger be just that, and be honest about it rather than presenting it as gourmet. If something intended for external use by the enterprise is written wholly or partly by AI, make a habit of saying something about what, how or how much. Perhaps make a small “made by AI” stamp.
  • Be conscious of who is behind the service and what it has been trained on. Can you vouch for using it based on your own values? If you use it in a work context, make sure you are also in line with your company's values.
  • Quality-assure everything, ALWAYS. AI services can produce erroneous information, even when based on your own data. Don't assume that what you send out is “good enough” until you are actually sure of it.
  • What does using it mean from a sustainability perspective? Training the AI models takes enormous amounts of energy, and of course the more prompts we make, the more resources using them requires as well. Would it be better in some cases to settle for conventional search and help, or simply take the plunge and write something from scratch?

As always: does this feel hard to deal with, even though you realize you will have to?

The content of this guest blog was first published as a post on Daniel Sønstevold's LinkedIn profile. Kaupr's blog column is open for contributions, analysis and debate. Send your article or article idea to morten@kaupr.io.