I’ve read a lot of books and articles and watched a lot of videos, and I don’t remember ever encountering the word “dangerous” before. The closest warnings I can remember relate to the unintended deletion of files and database records.

But since an Artificial General Intelligence could write new code and edit itself, I wondered whether it could also execute self-written and/or self-modified code.

I put two lines of code into a text file, which also let me test whether the concept would work with more than one line of code.

Then, with just one line of code, a separate script executed the contents of that text file.
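A minimal sketch of what I mean, in Python. The filename and the two lines of code are placeholders, not the exact code from my experiment:

```python
# Sketch only: "self_written.txt" and its contents are placeholder examples.
# Step 1: the program writes two lines of code into a text file.
with open("self_written.txt", "w") as f:
    f.write("x = 40 + 2\nprint(x)\n")

# Step 2: a single line of code executes whatever the text file now contains.
exec(open("self_written.txt").read())  # prints 42
```

Because the file is rewritten before each run, whatever was last written into it is what gets executed.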

Therefore, it is possible for an intelligent machine to execute self-written code.

For the record, I used a text file because of an error I was receiving with another file type. This is not optimized code, merely conceptual code.

Also, a separate file isn’t necessary. An intelligent machine could alternatively build a multi-line exec statement. I tested the code as a separate file because its contents would be editable by the AI.
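The no-separate-file version might look like this. The generated string here is again a placeholder of my own, standing in for whatever code the machine might build:

```python
# Sketch only: the machine builds a multi-line string of code at runtime
# (these particular lines are placeholder examples)...
generated_code = (
    "a = 1\n"
    "b = 2\n"
    "total = a + b\n"
)

# ...and then executes it directly, with no file on disk at all.
exec(generated_code)
print(total)  # prints 3
```

This variant leaves no file behind to inspect, which is arguably even harder to audit than the text-file approach.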

The articles and videos referring to the “exec” statement as “dangerous” did so in the context of creating vulnerabilities for hackers, but my experiment certainly seems to add to those warnings.