Okay, hi everybody. I see there are a lot of English speakers here, so I will switch to English, my broken English; I hope it's mostly understandable. But sorry, the slides are in German. A fair compromise: I talk in English, but you have to read German. My name is Peter Meyer, and I'm the head of the Mebis learning management system for the Bavarian schools. We have a very, very big instance there, with about 1.7 million students and 175,000 teachers. The teachers can all create their own courses within their own school category. We have a multi-tenant system implemented in Moodle, which is why we are using Moodle in a quite different way than expected. And sometime in the future our product has to get a new name: it will no longer be called Mebis, it will be called Bavarian Cloud for Schools, BYCS. This is the new name because we have a lot of sister projects beside us, with cloud storage, a messenger system and so on, and a new name was created as a framework for all these applications used by schools. As I already said, we've got more than 600,000 courses in our system at this time; it's really, really big. We have more than 21 million activities, and it is the most used platform within the Bavarian schools. (Audience: How many instances?) One. Just one. I could give a whole talk about that as well. We did think about splitting it up. Historically, about 15 years ago, we called it Bavarian Moodle, and that was the grandfather of the current system: a regional setup with about 50 schools per instance. Then it was switched to the predecessor of the current system, one monolithic infrastructure, because our Moodle partner at the time said that would be the best way to do it. And so it was implemented like that.
And since then, it is how it is now. During the pandemic we thought about splitting it up again into school instances, but many different problems would have arisen, so we decided no: we stay with the monolithic infrastructure and our multi-tenant system. We know how this system works, we know how it behaves in specific situations; otherwise we would have had to learn that all over again in a different way. And of course we would have ended up with 6,000 single instances, and we said, oh no, that is too much. So we kept one instance, and now we are quite happy with it. But the fact that we have a monolithic infrastructure is also the reason why we developed our own AI infrastructure instead of using the core one: the core does not support multi-tenancy. That is just one reason (there are more), but it was the key one. The other main reason was that we were a little earlier than Moodle. We already have roughly the same architecture, as I will show you soon, but we took a slightly different approach in one place where Moodle headquarters decided differently than we did. And this little tipping point makes our system, I believe (yes, it's a bias), more flexible and more intuitive, because the core system has to place placements everywhere, and we don't need these placements. This is one of the biggest differences, I believe. Tomorrow I have a meeting with Mr. Underberg; I don't know whether it's possible, but he wants to see our system tomorrow. The base of our system is the AI manager. It's quite similar to the AI subsystem, or rather to the management area of the AI subsystem.
But there is a bunch of other features that will probably never be in the AI subsystem, because these are features that we need for our students, our pupils, to work with AI. We have to have them because of legal restrictions or pedagogical guardrails. Therefore we have a very specific and highly customizable AI manager. This AI manager has two levels. The basic level is the administrator level: what the Moodle administrator can do. You can specify settings such as how much can be used and which models can be chosen from. The second level is what we call BYCS admins. A BYCS admin is the admin of a tenant, a tenant manager. They can configure the AI connections, the AI tools, to the settings a specific school needs. They can activate and deactivate students, grant them rights, reduce their rights, and so on. The BYCS admins can also switch between different language models. They can use ChatGPT, different GPT versions, they can use Gemini, they can choose Ollama with several models. And it's not finished yet: Mistral is also going to get a connector. And we have one new connector that's not listed here because it's not really needed by us; it's called Tally, a project of the federal government. With this connector, we don't have to buy our own access to language models; they bought it for everyone, and we only need to select which models we want to use. It's a meta-connector to use that whole infrastructure. (Audience: Is that the AWS cloud? Amazon?) No, it's not Amazon; it's their own structure. I don't know exactly what they did. As far as I know, they bought services from Google, from Microsoft Azure, from IONOS, and share them within this infrastructure. And of course you can extend this list of connectors to other providers by just adding a subplugin.
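Conceptually, such a connector subplugin boils down to one class per provider plus a registry the tenant admin chooses from. Here is a minimal sketch in Python; the real plugins are PHP Moodle subplugins, and every name below (`Connector`, `register_connector`, `EchoConnector`) is hypothetical, chosen only to illustrate the idea:

```python
from abc import ABC, abstractmethod


class Connector(ABC):
    """One connector per provider; the AI manager picks one per tenant."""

    @abstractmethod
    def make_request(self, prompt: str, options: dict) -> str:
        """Send the prompt to the provider's API and return the answer text."""


class EchoConnector(Connector):
    """Stand-in for a real provider (OpenAI, Gemini, Ollama, Mistral, ...)."""

    def make_request(self, prompt: str, options: dict) -> str:
        return f"echo: {prompt}"


# Registry: BYCS admins select one of the registered connectors per tenant.
CONNECTORS: dict[str, type[Connector]] = {}


def register_connector(name: str, cls: type[Connector]) -> None:
    CONNECTORS[name] = cls


def get_connector(name: str) -> Connector:
    return CONNECTORS[name]()


register_connector("echo", EchoConnector)
```

A new provider then only needs to implement `make_request` and register itself, which matches the small effort described in the talk.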
It's only about 150 lines of code or so to add another connector for another provider. Then we've got one of our pedagogical needs. The first one was the whole infrastructure; the second one is the AI control center. This is placed in the course; it's a block in the course. With it, the teacher can activate and deactivate the AI system as a whole for this course, or single functionalities. For example, they can activate or deactivate chat, feedback (that's Marcus Green's AI Text), image generation, image analysis, and so on. You can switch single functionalities on and off if you don't want to use them in your pedagogical setting. And this was very, very important for our attorneys: the teachers must be able to view the prompts and the AI's answers for all prompts of their students. They have to know what the students do. Therefore we have an option here to view all students' prompts. That was a key feature; without it, we would not be able to run it in the schools at all. Then we added a new button to the TinyMCE editor. It's called Tiny AI, and it gives you a button where you can edit and work with the text you have in your text area. You can generate a new text and insert it. You just mark the text you want to process, and then you can say: translate, or make an audio, or generate an image. Or if you have marked an image, you can analyze the image; if you have an image with handwritten text, you can extract the text from the image. It's all within this one little button, and I believe it's a really mighty tool. Then, obviously (I'm not a fan of it, but everybody talks about chatbots), we added a chatbot. It's not quite a chatbot like you find on the internet with ChatGPT or Gemini; the key functionalities are the same, but you have some more features.
You can add personalities. The teacher can define a personality for the chatbot, and all students using this chatbot communicate with that persona. For example, the teacher can say: answer like Albert Einstein. And the teachers and the students will chat with Albert Einstein. Or: answer like William Shakespeare, and he will answer in verse. It's quite a cool pedagogical feature. And (don't leave the room now) we sold this functionality to our legal people by telling them you can limit the answers by adding some prompts here, for example: only answer mathematical questions. They were absolutely amused. Something like that is also possible with this functionality. And down here, with this little button (it's a bit like WhatsApp or a messenger), you get the same functionality again, so you can also create images or audio within your chat. What I forgot to say earlier: each BYCS admin can limit how many requests per day a student or a teacher can make. Down here, really small, it says one of 50, for example, because the BYCS admin allowed 50 requests per day in this case, and this is your first one. After 50, a pop-up appears and says: hey, you've made too many requests today, calm down, come back tomorrow. It's partly to limit costs, but there are also pedagogical reasons to do so. Then we have integrated, and contributed a lot to, Marcus Green's AI Text question type plugin. This works quite well, and there are a lot of different settings to do different things: you can get feedback, you can get an email, you can get spell checking or grammar checking within your text with overlays. I don't want to say more about that one. Then, this is our youngest baby: it was our pitch last summer.
We created a question generation plugin, a question bank (qbank) plugin. With it you can create questions based on topics: you enter a topic and say, OK, I want five multiple-choice questions, and you get five multiple-choice questions about that topic. But the key feature is that you can select course contents or activities here, and then you get questions about the content of the course or the activity you selected. Our roadmap until today had three phases. In phase one we tested this whole ecosystem with 10 schools, from elementary schools up to A-level schools (high schools) and vocational training schools and so on; the first phase was for teachers only. In the second phase we extended it to students: the same schools, but additionally all the students of those schools. And from now on, from the start of the new school year (the school year starts in two weeks, but we start on the first of October), we are doing a step-by-step rollout to all our schools. We don't want to onboard them all at once; we want slow scaling, because we don't know whether we will hit the limits of the providers. Therefore we go step by step and watch the limits: do we reach them, do we need higher rate limits, and so on. (Audience: What about the budget? Who is paying for this?) We've got a budget of about 1 million euros for this year. (Audience: For tokens?) Yes, for this year. And I believe this will be quite enough, but we will see. In the first two phases, the costs were about 300 euros; it was not really heavily used, much less than we expected. There were a lot of limits, we found, that made it hard for schools to provide the functionalities to their students: not technical limits, but legal limits.
And this was the reason why a lot of schools started using it very, very late in the last school year, and why the costs were really, really small. I am quite sure that about 1 million euros will be enough for the whole school year, but we will see. (Audience: Which functionality was most expensive?) Image generation was the most expensive one, of course. (Audience: I'm surprised the teacher can read the students' dialogues. Why did you choose to do that? And you said there was not much cost, so it was not heavily used; might that be because students think, oh, my teacher is going to read this, so I'd better not?) Yes, but this was not the reason. The reason was that the teachers don't trust AI in school at this time. And it's hard to get a teacher who has done the job for 30 years or so without AI to now use AI and integrate it into their classes. Although there are teachers who have used it to create their content and do a lot of things with it. (Audience: But on the students' side, was there heavy usage?) The usage by students is limited, because the AI functionalities within the courses have to be activated by the teachers; if they don't activate them, the students can't use them. You can also use the chatbot on the dashboard, but that is only allowed for students older than 14 or 13, because of limitations of the providers. And it's hard to reach them, because our test group is 10 schools starting from elementary school. We had four or five schools that could use AI on the dashboard, and no one did it, because of the legal limits, which we have now removed. But for the second phase, that was the most limiting factor.
(Audience: Sorry, but if you think there's mistrust, is the fact that the teacher can read every prompt part of the reason? Because as a student, I wouldn't like this idea.) Yes. On the dashboard there is a chat that nobody can view. Well, not nobody: the head of the school can view it, but only the head of the school, and only when there are indications of abuse or something like that. It's not a pedagogical mechanism; it's a legal mechanism. And in schools, nothing is secret: teachers always view the results and the work of the students. And we can't do anything against it, because there was the... I don't know the English word for it, or whether there is one at all. (Audience: Sorry?) OK: the data protection officer, not of Bavaria, but of Baden-Württemberg. And he said the teacher must have the possibility to view the prompts of the students. I looked like you when I heard this the first time and thought, hey, this is not what I expected. But they wrote it in a big paper about AI, and all the other data protection officers took it over into their own guidelines. (Audience: A question about privacy. How did you manage that with respect to the privacy subsystem? Prompts are sent to external servers, and a student might ask for their data to be deleted; you can't guarantee that.) Yes, that's a big problem. We have several levels of security in our system. Our Moodle is working as a proxy: all requests of the client are accepted by our Moodle instance, by our plugin, the local AI manager. The AI manager then creates a new request and hands it over to the API of the provider. And within this request we can control which metadata is included.
There is no IP address of the user, no name or username, nothing else from the user; there is only the prompt. So we only have to take care about the prompt. And yes, this is a big problem, of course, because OpenAI, obviously, but also Microsoft and Google and all the others, can retain the requests for a minimum of 30 days. Even within an Azure setup where you deploy your own language model, there are these 30 days. This was one of the major problems we faced in the last year: how to handle these 30 days. And we managed it: Microsoft turned off this abuse monitoring, and with it the 30-day storage of all our prompts. Now we are really safe, as our legal people put it, because they don't store it. (Audience: Only for you?) Yes, for our instance; strictly speaking for the whole tenant of the state, but we are the only users of it. We said we need that, and they struggled for several months, and now we'll have it within the next one or two weeks. It was hard work. (Audience: A question on how you manage prompts. How much course content goes into a request, and what does that do to the number of tokens required per request?) For example, here in the chat, the teacher can adjust all the options, for example the length of the history which is sent to the LLM. That means you can adjust the amount of context. At the system level, we do not send any information beside what the user entered to the LLM, in this case. Yesterday we showed you another project of ours, the AI assistant; it's an extension for the chatbot. With this extension we deliver a lot of context (which page you are on, what the current settings are, and so on) and then pass it over to the LLM.
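The proxy behaviour described here (the client never talks to the provider; Moodle rebuilds a fresh request containing only the prompt, a system prompt, and a teacher-limited slice of the chat history) can be sketched conceptually. This is illustrative Python, not the real PHP plugin; the function names and request shape are assumptions:

```python
def trim_history(history: list[dict], max_messages: int) -> list[dict]:
    """Keep only the most recent messages, as configured by the teacher."""
    return history[-max_messages:] if max_messages > 0 else []


def build_provider_request(client_request: dict, system_prompt: str,
                           history: list[dict], max_history: int) -> dict:
    """Build a fresh outbound request for the provider's API.

    Only the prompt, the configured system prompt, and the trimmed history
    are forwarded; user metadata such as IP address and username is
    deliberately never copied into the outbound request.
    """
    return {
        "messages": [
            {"role": "system", "content": system_prompt},
            *trim_history(history, max_history),
            {"role": "user", "content": client_request["prompt"]},
        ]
    }
```

Because the outbound request is constructed from scratch rather than forwarded, anything not explicitly copied (session IDs, usernames, IPs) cannot leak to the provider by accident.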
But normally we only send the prompt, some system information, some system prompts we specify, or the prompt of the persona the teacher entered, but no other information. (Audience: So the model doesn't know from which page or screen it was called?) No, not at this time. We are thinking about sending the information collected in the course as context, so that you can talk with the course, but that's not the case yet; we are thinking about it, it's on our roadmap. Yes, that is a problem. We are at the first steps now. You have seen the functionalities we tested. Students can also use image generation in a course, also in assignments. It is already possible to use all of them, besides question generation. But of course these are the first steps we are taking. We already have certain functionality, but we will keep adding more functionalities and more pedagogical use cases that can be used in the courses. I hope we get more and more. But yes, this was the first stage, designing the infrastructure and the whole framework; now we can add more and more functionalities. And just to look at our roadmap, what we are planning for the next months or years, I don't know: we want functionalities that support the teaching tasks in the courses. One wish, a result of our test phase, was this: teachers upload an image, and I always have to add alternative texts. AI can do that. Add a button, and the alternative text is generated automatically. Or the teachers wanted to generate dialogues, audio dialogues, which means they create podcasts in their classes, in their courses. Because, as I have learned (I would never have thought it), creating podcasts is the new hot thing in schools.
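The alternative-text wish just mentioned is a nice example of a small AI-backed helper. A minimal sketch, assuming a vision-capable model is reachable through the tenant's connector; `generate_alt_text`, `ALT_TEXT_PROMPT`, and the `vision_model` callable are all hypothetical names for illustration, and the real implementation would be a PHP hook in Moodle's file handling:

```python
ALT_TEXT_PROMPT = (
    "Describe this image in one short sentence suitable as alternative "
    "text for a screen reader."
)


def generate_alt_text(image_bytes: bytes, vision_model) -> str:
    """Ask a vision-capable model for a one-sentence image description.

    `vision_model` is any callable (prompt, image) -> str; in the real
    system this call would go through the AI manager's connector so that
    tenant settings and usage limits still apply.
    """
    return vision_model(ALT_TEXT_PROMPT, image_bytes).strip()
```

The point of routing this through the same AI manager as everything else is that the teacher toggles, prompt logging, and rate limits apply uniformly, even for such one-click helpers.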
And we want to add AI tasks; you already saw yesterday how we want to do that, and I can show it to you in a few minutes. And we want to create the possibility of making function calls, but as of yesterday I'm not sure whether that still has a high priority; I believe it might be a low priority now. There is also a very strong wish from our teachers in the 10 schools for a prompt database, so that teachers can get some central prompts, or store their own prompts, optimize their own prompts and so on. That was a really big wish. I believe our next big release is in February, and I believe the prompt database and the AI tasks will be in the next release of our system in the coming months. And of course we want to connect to the AI subsystem. We already have a working prototype for that: we have an AI provider, which means our AI manager acts as the provider for the AI subsystem. Then we can also use the functionalities that Moodle will share with us. But the biggest thing is the big rollout next month; that's the biggest point on our roadmap. That has to work, and after that we will work through this list of functionalities. All our plugins are open source; they are all listed in the plugins directory. You can use them, test them, and contribute; I would be really happy if you contribute, and we are happy to get new ideas and features from others. (Audience: Do you have a dashboard for the admin to see how many tokens are used?) Yes, of course. Not only tokens (there is no budget view), but also how many requests a student, a user, makes, how many tokens they have used, and how many tokens they have used for a purpose.
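The per-user, per-purpose accounting behind that dashboard, which also powers the daily request limit mentioned earlier (e.g. 50 requests per day set by the BYCS admin), can be sketched like this. Again a conceptual Python sketch with hypothetical names, not the actual PHP plugin:

```python
from collections import defaultdict

# usage[(userid, purpose)] -> {"requests": n, "tokens": n}
# A "purpose" (chat, image generation, ...) can be shared by several plugins.
usage: dict[tuple[int, str], dict[str, int]] = defaultdict(
    lambda: {"requests": 0, "tokens": 0}
)


def record_usage(userid: int, purpose: str, tokens: int) -> None:
    """Account one completed request against the user and purpose."""
    entry = usage[(userid, purpose)]
    entry["requests"] += 1
    entry["tokens"] += tokens


def over_daily_limit(userid: int, purpose: str, max_requests: int) -> bool:
    """True once the tenant's per-day request limit has been reached."""
    return usage[(userid, purpose)]["requests"] >= max_requests
```

Checking `over_daily_limit` before each request is what triggers the "come back tomorrow" pop-up, and summing the same records per user or per purpose yields the admin dashboard figures.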
That is the difference between our system and the AI subsystem: they use placements and we use purposes. A purpose can be used by different plugins; you only have to define a purpose once and you can use it in the chatbot (the chatbot is a bad example) but also, say, for image generation. You can use it in different plugins, and then add the functionality wherever the plugin is attached. But yes, we have the placement problem too; what Moodle wants is placements. Sometimes we need a positioning hook or something like that where we can add our output. For example, for the alternative texts we need a hook point in the template where the template for the file picker is rendered. That is a difference. I'm not saying ours is better; it's more flexible, it's a different way of working. But we have the same problem Moodle has: at some point we have to get in somewhere. We chose the way of hook points or something like that. (Audience: Do you store all the prompts?) What do you mean? Yes. There are options to view them or delete them, and you can anonymize them or not. Not storing them at all is not possible, because for the chat, for example, you need the history of the prompts; otherwise you cannot interact with the chatbot. (Audience: If a user requests their data, is it all in Moodle?) Yes, of course. They get the prompts too; they are integrated into the Privacy API. I don't know how much time is left. (Three minutes.) I can't show very much in that, I believe. (Audience: One more question, about mathematics. If you ask the AI something and get formulas back, how do you render them in Moodle?) We have MathJax, and I believe we render it with that. I don't know exactly how we did it. (In JavaScript?) Yes.
There is a JavaScript call to re-render the whole page, and I believe that call makes it possible. I don't know exactly what he did; it's open source, you can look it up. OK, are there more questions? I believe I can't show any more. We are not allowed to evaluate what the students have done, but we have seen that they already used the chatbot and image generation. And one teacher gave us a look into the history, and let me say it, I don't know how to put it politically: they communicate with, for example, an image generation system the way they behave in an unobserved moment at school, in the back room or so, spraying something on the wall and making some drawings. They often work like that. That was just one example. On the other hand, we noticed that many students also ask questions they don't want to ask the teacher, because they think it's a stupid question and wonder what the teacher would think. That was also one of our observations; that is a positive one, the other was a negative one. We have very little data about this, because we are not allowed to look at the prompts. We have contracts saying we do not keep them and must not use them; we are not allowed to use them. I don't believe we will get consent for that. Maybe in the future; from the first of October we can analyze some data on how the students use it. And our goal, or one of our goals (it's somewhat further in the future), is to build a profile of the students, so that learning difficulties or similar can reveal themselves to the teacher. Because if someone has a reading and spelling difficulty, the teacher must watch out for it. And we want to support the teachers in detecting this and in helping the students.
We don't want to blame the students; we want to help them get the right support. But that is somewhat further in the future; it will be a task we tackle ourselves. We will do it. I don't believe we can analyze the prompts themselves; maybe we can analyze them on a meta level, on a statistical level, but not what the students really type. The reason is that the schools are responsible for the data. They are responsible for the data of their students, but we have to process it, so we need the consent of the school. We only provide a system, and they are responsible for their own data. It's a strange construction, but it has historical reasons. OK, then thank you very much for listening. Thank you.