Deploy a ChatGPT bot for a Kubernetes cluster

Today I want to share an interesting project called “K8s ChatGPT Bot[1]”. Its purpose is to deploy a ChatGPT bot for a K8s cluster, so we can ask ChatGPT to help us resolve Prometheus alerts and get a concise answer. No more being left alone in the dark when an alert fires!

We need to use Robusta[2]. If you don’t have Robusta yet, you can refer to “K8s – Robusta, K8s Troubleshooting Platform[3]” to build a Robusta platform.

Below is a screenshot of how the Robusta platform works:

[Screenshot: how the Robusta platform works]

You can check out the full demo video here:

1. Run the K8s ChatGPT bot project

The bot is built on Robusta.dev[4], an open-source platform for responding to K8s alerts. Its workflow is roughly as follows:

  • Prometheus forwards alerts to Robusta.dev using a webhook receiver (a minimal receiver sketch follows this list).

  • Robusta.dev asks ChatGPT how to fix the Prometheus alert.
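
If you are wiring an existing Prometheus/Alertmanager into Robusta rather than letting Robusta install one, that first step is just a standard Alertmanager webhook receiver. The sketch below shows the general shape; the robusta-runner service name, the default namespace, and the /api/alerts path are assumptions based on a default Helm install, so verify them against your own release. If you let Robusta install its own Prometheus stack (as in this walkthrough), this wiring is already done for you.

# Alertmanager sketch: forward alerts to the Robusta runner.
# Service name, namespace, and URL path are assumptions for a default Helm install.
route:
  receiver: robusta

receivers:
  - name: robusta
    webhook_configs:
      - url: "http://robusta-runner.default.svc.cluster.local/api/alerts"
        send_resolved: true
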

2. Prerequisites

  • Slack

  • Kubernetes cluster

  • Python 3.7 or above

3. How to install Robusta

Generate Robusta configuration file

Prepare a Python virtual environment and install the Robusta CLI in it:

$ python3.10 -m venv robusta
$ source robusta/bin/activate
(robusta) $ pip install -U robusta-cli --no-cache
Collecting robusta-cli
Downloading robusta_cli-0.10.10-py3-none-any.whl (223 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 223.8/223.8 kB 30.0 MB/s eta 0:00:00
Collecting pymsteams<0.2.0,>=0.1.16
  Downloading pymsteams-0.1.16.tar.gz (7.6 kB)
  Preparing metadata (setup.py)... done
...
Successfully installed PyJWT-2.4.0 appdirs-1.4.4 autopep8-2.0.1 black-21.5b2
cachetools-5.2.1 certifi-2022.12.7 cffi-1.15.1 charset-normalizer-3.0.1
...
ruamel.yaml.clib-0.2.7 six-1.16.0 slack-sdk-3.19.5 tenacity-8.1.0
toml-0.10.2 tomli-2.0.1 typer-0.4.2 typing-extensions-4.4.0 urllib3-1.26.14
 watchgod-0.7 webexteamssdk-1.6.1 websocket-client-1.3.3
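
Once pip finishes, a quick smoke test confirms the CLI is available inside the virtual environment (robusta --help simply prints the available sub-commands):

(robusta) $ robusta --help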

Use robusta to generate a configuration file:

$ robusta gen-config
Robusta reports its findings to external destinations (we call them "sinks").
We'll define some of them now.

Configure Slack integration? This is HIGHLY recommended. [Y/n]: Y
If your browser does not automatically launch, open the below url:
https://api.robusta.dev/integrations/slack?id=xxxx

Configure Slack integration

Open the web page with a browser: https://api.robusta.dev/integrations/slack?id=xxxx

[Screenshot: the Robusta Slack integration page]

Update permissions:

[Screenshot: updating the requested Slack permissions]

Congratulations, you have successfully configured the Slack integration.

[Screenshot: Slack integration configured successfully]

Now, back in our terminal, we can see the following output, indicating that the operation was successful:

$ robusta gen-config
Robusta reports its findings to external destinations (we call them "sinks").
We'll define some of them now.

Configure Slack integration? This is HIGHLY recommended. [Y/n]: Y
If your browser does not automatically launch, open the below url:
https://api.robusta.dev/integrations/slack?id=xxxx
You've just connected Robusta to the Slack of: devopsfans
Which slack channel should I send notifications to? #k8s-chatgpt-bot
Configure Robusta UI sink? This is HIGHLY recommended. [Y/n]: Y
Enter your Gmail/Google address. This will be used to login: [email protected]
Choose your account name (e.g your organization name): devopsfans
Successfully registered.

Robusta can use Prometheus as an alert source.
If you haven't installed it yet, Robusta can install a
pre-configured Prometheus.
Would you like to do so? [y/N]: y
Please read and approve our End User License Agreement:
https://api.robusta.dev/eula.html
Do you accept our End User License Agreement? [y/N]: y
Last question! Would you like to help us improve Robusta by sending exception reports? [y/N]: N
Saved configuration to ./generated_values.yaml - save this file for future use!
Finish installing with Helm (see the Robusta docs).
Then login to Robusta UI at https://platform.robusta.dev

By the way, we'll send you some messages later to get feedback.
(We don't store your API key, so we scheduled future messages using Slack's
API)

In the Slack channel, we can also see:

[Screenshot: Robusta’s confirmation message in the Slack channel]

Installing Robusta with Helm 3

Add the robusta chart repository and update the Helm repositories:

$ helm repo add robusta https://robusta-charts.storage.googleapis.com && helm repo update
"robusta" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "kedacore" chart repository
...Successfully got an update from the "robusta" chart repository
...Successfully got an update from the "grafana" chart repository
...Successfully got an update from the "prometheus-community" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈

Update the generated_values.yaml file

Update the generated_values.yaml file with the following content:

playbookRepos:
  chatgpt_robusta_actions:
    url: "https://github.com/robusta-dev/kubernetes-chatgpt-bot.git"

customPlaybooks:
# Add the 'Ask ChatGPT' button to all Prometheus alerts
- triggers:
  - on_prometheus_alert: {}
  actions:
  - chat_gpt_enricher: {}

globalConfig:
  chat_gpt_token: YOUR KEY GOES HERE
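
If you prefer not to keep the ChatGPT token in generated_values.yaml, you can leave the placeholder in the file and override the same globalConfig.chat_gpt_token key on the command line when you run the Helm install in the next step. This is just a sketch of a standard Helm values override; OPENAI_TOKEN is an illustrative environment variable name:

# Sketch: override the token at install time instead of storing it in the values file.
$ helm install robusta robusta/robusta -f ./generated_values.yaml \
    --set clusterName=dev-cluster \
    --set globalConfig.chat_gpt_token="$OPENAI_TOKEN"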

Deploy Robusta to K8s

$ helm install robusta robusta/robusta -f ./generated_values.yaml \
--set clusterName=dev-cluster
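
If you edit generated_values.yaml later (for example to change the token or add more custom playbooks), re-apply it with a standard Helm upgrade, using the same release, chart, and values file as above:

# Re-apply the values file after editing it.
$ helm upgrade robusta robusta/robusta -f ./generated_values.yaml \
    --set clusterName=dev-cluster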

Verify that the Robusta pods are running correctly and that no errors appear in their logs:

$ kubectl get pods -A | grep robusta
default alertmanager-robusta-kube-prometheus-st-alertmanager-0 2/2 Running 1 (4m19s ago) 9m25s
default prometheus-robusta-kube-prometheus-st-prometheus-0 2/2 Running 0 9m25s
default robusta-forwarder-6b7d8d9d88-2rv9d 1/1 Running 0 9m29s
default robusta-grafana-64944bfcdc-v97xh 3/3 Running 0 9m29s
default robusta-kube-prometheus-st-admission-patch-6zj4b 0/1 Completed 0 9m28s
default robusta-kube-prometheus-st-operator-7b985d7fb-c9f9t 1/1 Running 0 9m29s
default robusta-kube-state-metrics-688d794968-ll6gf 1/1 Running 0 9m29s
default robusta-prometheus-node-exporter-2k5f7 1/1 Running 0 5m24s
default robusta-prometheus-node-exporter-zxsrg 1/1 Running 0 9m29s
default robusta-runner-5868b494d6-m6292 1/1 Running 0 9m29s

$ robusta logs
setting up colored logging
2023-01-14 22:57:01.428 INFO logger initialized using INFO log level
2023-01-14 22:57:01.429 INFO Creating hikaru monkey patches
2023-01-14 22:57:01.429 INFO Creating yaml monkey patch
2023-01-14 22:57:01.429 INFO Creating kubernetes ContainerImage monkey patch
2023-01-14 22:57:01.430 INFO watching dir /etc/robusta/playbooks/ for custom playbooks changes
2023-01-14 22:57:01.431 INFO watching dir /etc/robusta/config/active_playbooks.yaml for custom playbooks changes
2023-01-14 22:57:01.431 INFO Reloading playbook packages due to change on initialization
2023-01-14 22:57:01.431 INFO loading config /etc/robusta/config/active_playbooks.yaml
2023-01-14 22:57:01.467 INFO No custom playbooks defined at /etc/robusta/playbooks/storage
2023-01-14 22:57:01.468 INFO Cloning git repo https://github.com/robusta-dev/kubernetes-chatgpt-bot.git. repo name kubernetes-chatgpt-bot
...
2023-01-14 22:57:07.364 INFO connecting to server as account_id=8302df56-c554-4129-8b95-d143d1f2e3a2; cluster_name=dev-cluster
2023-01-14 22:57:07.977 INFO Initializing services cache
2023-01-14 22:57:08.203 INFO Initializing nodes cache
2023-01-14 22:57:08.395 INFO Initializing jobs cache
2023-01-14 22:57:08.603 INFO Getting events history
2023-01-14 22:57:10.403 INFO Cluster historical data sent.
2023-01-14 23:04:43.681 INFO cluster status {'account_id': '8302df56-c554-4129-8b95-d143d1f2e3a2', 'cluster_id': 'dev-cluster', 'version': '0.10.10', 'last_alert_at': '2023-01-14 23:04:18.959377', 'light_actions': ['related_pods', 'prometheus_enricher', 'add_silence', 'delete_pod', 'delete_silence', 'get_silences', 'logs_enricher', 'pod_events_enricher', 'deployment_events_enricher', 'job_events_enricher', 'job_pod_enricher', 'get_resource_yaml', 'node_cpu_enricher', 'node_disk_analyzer', 'node_running_pods_enricher', 'node_allocatable_resources_enricher', 'node_status_enricher', 'node_graph_enricher', 'oomkilled_container_graph_enricher', 'pod_oom_killer_enricher', 'oom_killer_enricher', 'volume_analysis', 'python_profiler', 'pod_ps', 'python_memory', 'debugger_stack_trace', 'python_process_inspector', 'prometheus_alert', 'create_pvc_snapshot'], 'updated_at': 'now()'}
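
The output above comes from the robusta-runner pod, so if the robusta CLI is not installed on the machine you are working from, roughly the same information can be pulled with plain kubectl (the deployment name matches the pod listing above):

# Fetch the runner logs directly with kubectl instead of the robusta CLI.
$ kubectl logs deployment/robusta-runner -n default --tail=100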

4. Using Robusta

Now, we can finally use Robusta! By default, Robusta sends a notification when a K8s Pod crashes.

So let’s create a pod that crashes:

$ kubectl apply -f https://gist.githubusercontent.com/robusta-lab/283609047306dc1f05cf59806ade30b6/raw
deployment.apps/crashpod created

$ kubectl get pods -A | grep crash
default crashpod-64db77b594-cgz4s 0/1 CrashLoopBackOff 2 (21s ago) 36s
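
The gist applied above is simply a Deployment whose container exits with an error right away, which drives it into CrashLoopBackOff. A rough equivalent is sketched below in case you prefer to write it yourself; this is not the exact contents of the gist, and the image and names are illustrative:

# Sketch of a deliberately crashing Deployment (not the exact gist manifest).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: crashpod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: crashpod
  template:
    metadata:
      labels:
        app: crashpod
    spec:
      containers:
        - name: crashpod
          image: busybox
          # Exit non-zero immediately so the kubelet keeps restarting the container.
          command: ["sh", "-c", "echo 'crashing on purpose'; exit 1"]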

Once the pod has restarted twice, we will receive a message in the Slack channel about the crashing pod, like this:

[Screenshot: Slack notification about the crashing pod]

5. Interact with ChatGPT

Through the experiments above, we have confirmed that Robusta is integrated with our Slack workspace and K8s cluster. Next, let’s interact with the ChatGPT bot!

Trigger a Prometheus alert immediately, skipping the normal delay:

$ robusta playbooks trigger prometheus_alert alert_name=KubePodCrashLooping namespace=default pod_name=example-pod
=========================================================================
Triggering action...
=========================================================================
running cmd: curl -X POST http://localhost:5000/api/trigger -H 'Content-Type: application/json' -d
'{"action_name": "prometheus_alert", "action_params":
{"alert_name": "KubePodCrashLooping", "namespace": "default",
"pod_name": "example-pod"}}'
{"success":true}

=========================================================================
Fetching logs...
=========================================================================
2023-01-14 23:14:33.463 INFO Error loading kubernetes pod default/example-pod. reason: Not Found status: 404
2023-01-14 23:14:33.481 INFO Error loading kubernetes pod default/example-pod. reason: Not Found status: 404
2023-01-14 23:14:33.503 INFO Error loading kubernetes pod default/example-pod. reason: Not Found status: 404
2023-01-14 23:14:33.505 ERROR cannot run pod_events_enricher on alert with no pod object: PrometheusKubernetesAlert(sink_findings=defaultdict(<class 'list'>, {'main_slack_sink': [<robusta.core.reporting.base.Finding object at 0x7fab53074e20>], 'main_ms_teams_sink': [<robusta.core.reporting.base.Finding object at 0x7fab53074700>], 'robusta_ui_sink': [<robusta.core.reporting.base.Finding object at 0x7fab40773a30>]}), named_sinks=['main_slack_sink', 'main_ms_teams_sink', 'robusta_ui_sink'], response={'success': True}, stop_processing=False, _scheduler=<robusta.integrations.scheduled.playbook_scheduler_manager_impl.PlaybooksSchedulerManagerImpl object at 0x7fab4088e0a0>, _context=ExecutionContext(account_id='8302df56-c554-4129-8b95-d143d1f2e3a2', cluster_name='dev-cluster'), obj=None, alert=PrometheusAlert(endsAt=datetime.datetime(2023, 1, 14, 23, 14, 33, 430401), generatorURL='', startsAt=datetime.datetime(2023, 1, 14, 23, 14, 33, 430406), fingerprint='', status='firing', labels={'severity': 'error', 'namespace': 'default', 'alertname': 'KubePodCrashLooping', 'pod': 'example-pod'}, annotations={}), alert_name='KubePodCrashLooping', alert_severity='error', label_namespace='default', node=None, pod=None, deployment=None, job=None, daemonset=None, statefulset=None)
2023-01-14 23:14:33.524 INFO Error loading kubernetes pod default/example-pod. reason: Not Found status: 404
2023-01-14 23:14:33.696 ERROR CallbackBlock not supported for msteams
2023-01-14 23:14:33.697 ERROR error sending message to msteams
e=Invalid URL 'False': No schema supplied. Perhaps you meant http://False?

=========================================================================
Done!
=========================================================================

Now switch to Slack and we’ll see a new alert, this time with an “Ask ChatGPT” button!

[Screenshot: Slack alert with the “Ask ChatGPT” button]

That’s it! Congratulations, we just successfully installed our first K8s ChatGPT bot!
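
When you are done experimenting, the demo workload can be removed again; the deployment name comes from the kubectl apply output earlier:

# Remove the deliberately crashing demo deployment.
$ kubectl delete deployment crashpod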

Example 2: Node capacity at 100%

Here is an example of a node reaching 100% capacity:

[Screenshot: a node reaching 100% capacity]

6. Robusta UI

Robusta also provides a UI for its integrations as well as a pre-configured Prometheus stack. If you don’t have your own monitoring set up on your K8s cluster yet and want to try out this ChatGPT bot, you can use what Robusta already provides!

[Screenshot: the Robusta UI]

7. Conclusion

It took us some time, but the K8s + ChatGPT platform is finally up and running. This is a community project with a lot of potential, and I hope you enjoyed this article.

If you don’t have your own Prometheus monitoring system for your K8s cluster yet, you can use Robusta’s pre-configured Prometheus stack.

Related Links:

  1. https://github.com/robusta-dev/kubernetes-chatgpt-bot

  2. https://home.robusta.dev/

  3. https://medium.com/dev-genius/k8s-robusta-k8s-troubleshooting-platform-efd389b47f24

  4. https://github.com/robusta-dev/robusta

