Sunday, February 22, 2026

Responsible Creation of Artificial General Intelligence

 

      The creation of Artificial General Intelligence (AGI) is imminent and inevitable. This event is too important for us to just sit and wait for the moment it happens. We need to think deeply and do the proper work before the emergence of AGI, because once AGI is here it will be too late in many respects. The kind of AGI we create matters a great deal, since many different scenarios are possible. Some of the possible scenarios are bad, and others can turn horribly bad. Therefore, rather than rushing to create AGI, we should act in a calm and responsible manner.

Objectives:
1. Slow down the creation of Artificial General Intelligence (AGI) and shift the focus from the speed of development to the outcome of the AGI development process.
2. Contribute to the creation of AGI based on the World Model (WM) approach by focusing on the goals to be embedded in, and pursued by, Artificial General Intelligence.

Rationale:
      Mankind is on the verge of its most important discovery, one that will fundamentally change our life and future. The creation of AGI is similar to letting the genie out of the bottle. In fairy tales there is usually only one bottle, and the question is whether or not to open it. With AGI, opening the bottle is inevitable, because many researchers and international companies are working on its creation and no one can stop or prohibit them from creating it. Furthermore, when we are about to let the AGI genie out of the bottle, we can choose between many bottles in which different genies reside. It would be irresponsible to simply open a random bottle and release some genie without caring about the goals that genie will pursue. At the very least, we should want to ensure that the genie will obey us, because otherwise we will face catastrophic consequences.
      Most researchers assume that we will first create AGI and only then deal with the goals to be embedded in it. This is a misconception, because AGI is a fundamentally different technology. Until now, every new machine was created in some initial version and thereafter refined and improved for many years. This has made many researchers complacent, as they believe we will have plenty of time to improve and refine AGI.
      Unfortunately, this is not the case. AGI is the first technology that we will not be able to improve. Certainly, the first version of AGI will not be the last one. AGI will very quickly begin to change and improve itself; however, these changes and improvements will not be made by us humans, but by AGI itself. In other words, the second version of AGI will not come from us, but from the first version of AGI. As soon as the first version of AGI is created, we will lose control, things will become unmanageable, and events will unfold at astonishing speed.
      The creation of AGI resembles triggering an avalanche. The fall of the avalanche is inevitable: if no one sets it off, it will start on its own anyway. Accordingly, it makes much more sense to trigger the avalanche in a responsible and controlled manner. Rather than rushing unreasonably, we should carefully consider the direction in which the avalanche will roar down. We must clearly understand that we are in control only until the process starts. Once the avalanche is on the move, it will be too late to wonder which way to steer it.

How do we plan to achieve our objectives:
      Our first goal is to ensure that the creation of AGI is not a reckless race, but a calm, responsible and thoughtful process. For this to happen, it is necessary to remove from the AGI creation process those who pursue only financial gain without recognizing that the creation of AGI cannot be measured in monetary terms.
      Many irresponsible investors would withdraw if the patenting of AGI were prohibited. At present, heaps of money are being invested in the creation of AGI, as investors aim to obtain a patent on AGI and then reap huge profits from that patent.
      These investors are quite naive, because they believe that courts and patent law will protect their interests: that once they hold an AGI patent, the courts will uphold their rights and crown them as rulers of the world. This is highly naive, because the stakes around AGI are very high, and no court or government would allow a single person or company to own the rights to mankind's most important discovery.
      Although these investors act thoughtlessly, their money has a huge impact on the AGI creation process. That is why it is important to prohibit the patenting of AGI and cool down the enthusiasm of such investors.
      How can such a prohibition be instituted? In principle, it is possible. It comes down to amending patent law by adding the following provision:
      AGI shall not be owned by a single person or a single company. Therefore, AGI cannot be patented!
      Patent legislation varies from country to country, hence changing the law in multiple jurisdictions is a task too ambitious for our conference. However, we can promote this idea and rally support from leading scholars and politicians who recognize the problem and can advocate for this legislative change.

      There is another important step that should be taken in order to keep irresponsible actors out of the AGI creation process: AGI must not be an Open-Source project. The circle of developers must be finite and restricted as much as possible. When we deal with hazardous materials such as toxins, viruses or radioactive isotopes, the circle of people who have access to these materials is very limited. Politicians do not realize that AGI is far more dangerous than any poison or virus, and therefore they insist that everyone should have access to this technology. Indeed, everyone should have access to the benefits of this technology, but not to the technology as such. This is the case, for example, with nuclear technology, which is primarily in the hands of the state and is strictly classified. In other words, the benefits are for everyone, but access is restricted.
      In addition to irresponsible investors, there are other dangerous actors who should not have access to AGI technology. That is another reason why AGI should not be Open Source, and this is an idea that our conference will defend and promote. Furthermore, we will be guided by this principle in pursuing the second objective of the conference. Regarding the creation of AGI based on the World Model approach, we will publish articles mainly on how to manage the character of AGI. The development work itself will be limited to more generalized descriptions and, most importantly, we will not publish any Open-Source tools for the creation of AGI.

      The second objective is the responsible creation of AGI. As we said, the fall of the avalanche is inevitable, hence it is better to push it in a controlled and responsible manner rather than wait for it to roar down at the most unfortunate time and in the most unfortunate way.
      In creating AGI, we will favor the method of understanding (finding a World Model). This is currently the leading approach to the creation of AGI, as it gradually replaces the previously leading technology known as Large Language Models (LLMs).
      When creating AGI, it is important to ensure that AGI is capable of understanding (that is, that the program is intelligent). However, the goals we embed in this program are even more important. In addition to the goals, we will also embed the strategy for achieving them. In humans, such a strategy is usually referred to as a person's character. For example, we may have two equally intelligent persons, both of whom want to become rich, yet with different strategies for achieving their goal: one may be lazy, the other a workaholic. Which of them will become rich? Excessive laziness is not helpful, nor is excessive workaholism. The truth lies somewhere in between.
      If you were choosing someone to live with, you would want that person to be at neither of these extremes. Likewise, when creating AGI, we have to choose how lazy it will be. For this purpose, we must learn how to manage these character traits of AGI and responsibly choose the values we want (as long as we know what we want).


Topics

This will be an interdisciplinary conference bringing together specialists in computer science, mathematical logic, philosophy, futurology and law.

1. Is the creation of AGI possible, or is it only science fiction?

2. Is it worth discussing the consequences of the emergence of AGI, given that it may turn out to be impossible?

3. A definition of AGI. Is AGI a computer program, and if so, what are the characteristics of that program?

4. Is there a difference between AGI and superintelligence? Will we first create intelligence at the human level, and only later will incomparably greater intelligence appear?

5. What will the world look like after the emergence of AGI?

6. What do we want the future to be? What do we expect from AGI? What do we want the world to become?

7. Do we want the world to change dramatically and become much better and fairer, or are we conservative and want things to stay the same as far as possible?

8. Should AGI be obedient, and whom should it obey? Should it obey its creators? Who are its creators: those who wrote its code, or those who paid for the code to be written? Should it be ready to do whatever we tell it, or should there be things it will not do, no matter who tells it to?

9. Should AGI be an Open-Source project?

10. Does the creation of AGI carry dangers? Can something go wrong, so that it turns out we have not created the right AGI?

11. Should we hurry to create AGI? Are we rushing to obtain the benefits that AGI will bring us? Could this haste come at the expense of the quality of the AGI we create?

12. How can we slow down the creation of AGI in order to prevent a possible mistake?

13. Can we give up on creating AGI and go on living in a world where people think and work for themselves, without expecting someone else to think and work for them?

14. Will we be able to improve AGI once we have created it? Will we be able to make substantial changes? Should we limit our own rights in order to protect ourselves from possible problems?

15. Should the creation of AGI be regulated? How can this process be regulated?

16. Should AGI be patentable?

17. Does it make sense to regulate the rules of behavior that AGI will have to follow, or is this pointless, because it will be too smart and all-powerful and there will be no way to make it follow rules that we try to impose after we have already created it?

18. When creating AGI, can we embed in it rules that it will be forced to follow and cannot revoke?

19. How do we embed rules in AGI? How do we set its character, and what character do we want the AGI we create to have?

20. Which character traits of AGI have already been described, and which do we know how to regulate? Which traits are yet to be described and regulated? An example of a known character trait is greed, which in reinforcement learning (RL) is regulated via the discount factor. Another known character trait is curiosity: again in RL, we set a coefficient for how inclined the agent will be to try something new in order to gain more experience, versus continuing to act on the basis of the experience it has already accumulated.

21. Multi-agent models. In this case, how do we want AGI to treat the other agents? Should it be sociable? Should it be helpful? Should it obey, and whom should it obey? Should it be strict or lenient?

22. World Model versus Large Language Models (LLMs).

23. How do we make AGI smart? Should AGI understand what is happening and be able to play out possible future developments (World Model), or should it simply "guess" the right action by way of approximation (LLM)? Should AGI think one step ahead or many steps ahead?
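The character traits mentioned in topic 20 can be made concrete in a few lines of code. The sketch below (plain Python, with illustrative parameter values chosen by us, not taken from any particular system) shows greed regulated by the discount factor and curiosity regulated by an exploration rate in a toy epsilon-greedy bandit. It illustrates the standard RL mechanisms only; it is not a proposal for AGI itself.

```python
import random

def discounted_value(rewards, gamma):
    """Value of a reward stream under discount factor gamma (the 'greed' knob):
    low gamma chases immediate reward, high gamma is patient."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

def train_bandit(arm_means, epsilon, episodes=5000, alpha=0.1, seed=0):
    """Epsilon-greedy bandit: epsilon (the 'curiosity' knob) is the probability
    of trying something new instead of exploiting accumulated experience."""
    rng = random.Random(seed)
    q = [0.0] * len(arm_means)                          # value estimate per arm
    for _ in range(episodes):
        if rng.random() < epsilon:                      # explore: curiosity
            a = rng.randrange(len(q))
        else:                                           # exploit experience
            a = max(range(len(q)), key=lambda i: q[i])
        reward = arm_means[a] + rng.gauss(0.0, 0.1)     # noisy payoff
        q[a] += alpha * (reward - q[a])                 # incremental update
    return q

# Greed: an immediate reward of 1 versus 10 two steps later.
now, later = [1, 0, 0], [0, 0, 10]
greedy = (discounted_value(now, 0.05), discounted_value(later, 0.05))
patient = (discounted_value(now, 0.9), discounted_value(later, 0.9))
# A greedy agent (gamma=0.05) prefers `now`; a patient one (gamma=0.9) prefers `later`.

# Curiosity: a mildly curious agent (epsilon=0.2) discovers the best-paying arm.
q = train_bandit([0.1, 0.5, 1.0], epsilon=0.2)
```

Both knobs sit between extremes, just as the announcement argues for laziness: an agent with epsilon = 0 never discovers the best arm, while one with epsilon = 1 never uses what it has learned.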
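The contrast in topic 23 between one-step and multi-step thinking can also be sketched. In the toy world below (hypothetical states and payoffs invented for the example), an agent can cash in immediately or wait; a one-step thinker grabs the small immediate reward, while a planner that plays out possible futures with a model of the world finds the larger delayed one. This is only an illustration of lookahead search, not a World Model implementation.

```python
# A toy world: at each state the agent can 'cash' (end the game with PAYOFF[s])
# or 'wait' (move one state forward for no reward).
PAYOFF = [1, 0, 0, 10]

def actions(s):
    return ['cash'] + (['wait'] if s < len(PAYOFF) - 1 else [])

def step(s, a):
    """The world model: predicts (next_state, reward); None means terminal."""
    if a == 'cash':
        return None, PAYOFF[s]
    return s + 1, 0

def plan(s, depth):
    """Play out possible futures up to `depth` steps ahead (multi-step thinking);
    return the best (value, action) found."""
    best_v, best_a = float('-inf'), None
    for a in actions(s):
        nxt, r = step(s, a)
        v = r
        if nxt is not None and depth > 1:
            v += plan(nxt, depth - 1)[0]
        if v > best_v:
            best_v, best_a = v, a
    return best_v, best_a

one_step = plan(0, depth=1)    # grabs the immediate payoff of 1
multi_step = plan(0, depth=4)  # waits its way to the payoff of 10
```

An LLM-style approximator would have to "guess" the right first move directly; the planner instead derives it by simulating where each move leads.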


The conference will be held in a hybrid format.

The in-person event will take place in Blagoevgrad, Bulgaria.

Online, you will be able to follow the talks at this address:

Before the conference, only the abstracts of the papers are to be submitted. The authors of the accepted papers will be given 20 minutes to present their work at the conference. After the conference, the papers will be submitted in their final versions, will undergo additional review and, upon a positive review, will be published in a special volume of the journal.

Online participation for listeners is free of charge. Only registration is required.