Daniel Sønstevold has been thinking about safety, transparency, solidarity, quality and sustainability in connection with AI, and asks in this blog post: Can we really stand behind our use of AI? Sønstevold is an M365 Strategist, Generative AI Evangelist and consultant at EVIDI.
The undersigned, along with many others, is busy drumming Copilot and AI into the ears of anyone willing to listen. But how much thought do we give to what it actually means to take these services into use?
Microsoft has its six principles for responsible AI, but those principles are mostly about development and implementation, not usage. AI can help me draft the project report, the blog post, and the sales pitch, but is it okay for me to use it? My answer is yes, but it is about time we established some rules of the road.
The media houses have already started, and seem diligent about disclosing when articles are written or summarized by AI and only quality-assured by a human eye.
Microsoft has published a separate Copilot Copyright Commitment, so the media stories about writers losing income from their works do not, as far as I am concerned, apply to this product. The topic interests me all the same, even if I choose to trust Microsoft here.
How long will it remain a mark of quality that a text is "100% human made", and what types of text will that apply to?
In any case, several colleagues and I have given some thought to topics that deserve attention in the time ahead. My thoughts revolve around safety, transparency, solidarity, quality and sustainability. Above all, what matters most to me is awareness of these topics, and of the connections between them.
As always: this is hard to get a grip on, but you realize you are going to have to.
The content of this guest blog was first published as a blog post on Daniel Sønstevold's LinkedIn profile. Kaupr's blog column is open for posts, analysis and debate. Send your article or article idea to morten@kaupr.io.