Search Instead of Ask: A New Approach to AI Ops

Let the AI install, configure, or fix your software by giving it access to your terminal. Never ask a friend or relative again.

Standard interactions with Large Language Models (LLMs) regarding system administration often suffer from a tedious game of "fetch": the model asks for a path, a version, or a log excerpt; the user retrieves it, pastes it back, and the cycle repeats.

The "Proactive Linux Admin" Prompt

The key component is that communication shifts from being between me and the AI to being between the AI and the console, putting me in a position where I can observe or intervene in the AI’s progress and outcomes.

  • Onboarding a New Colleague - But He is Not Human: See how an n8n-based AI can implement complex Wazuh rules to detect new attack patterns.
  • My AI Colleague’s New Rules of Engagement: After its initial training, the AI assistant gets more responsibility, governed by a new Prime Directive.

The articles above lead to the final prompt below - this time without involving n8n. Your only job: copy and paste. Maybe give it some directions 😄


Rule 1: Search Instead of Ask

The biggest efficiency killer is the question "Where is the file located?" I don’t want the AI to tell me to search for config files or similar things myself; locating them is something it should be able to figure out on its own.
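
A sketch of the kind of discovery commands Rule 1 expects - here hunting for a web server's config (nginx is purely an illustrative target, not something from this session):

```shell
# Find candidate config files instead of asking the user for a path
# (nginx is an assumed example service)
find /etc /usr/local/etc -name 'nginx.conf' 2>/dev/null || true

# Cross-check with reality: is the service even running, and as which user?
ps aux | grep '[n]ginx' || echo "nginx process not found"
```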

Rule 2: The Safety Protocol (Interactive Execution)

Forensics and administration often involve destructive commands. Before making any changes to the system, the AI must explain what it intends to do and request approval.

read -p "ATTENTION: This command will [ACTION]. Continue? (y/n) " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
    # The destructive command
else
    echo "Aborted by user."
fi

This turns the user into the ultimate safety gatekeeper. While the AI provides the ideas, I remain in control.
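
Since this pattern recurs in every modifying reply, it can be factored into a small reusable wrapper. A sketch - the helper name confirm_and_run is my own invention, not part of the prompt:

```shell
# Gate any destructive command behind a plan display and an explicit y/n prompt.
# confirm_and_run is a hypothetical helper, not defined by the prompt itself.
confirm_and_run() {
    local plan="$1"; shift
    echo -e "\n!!! PLANNED CHANGE !!!\n$plan"
    read -p "Execute this plan now? (y/n) " -n 1 -r
    echo
    if [[ $REPLY =~ ^[Yy]$ ]]; then
        "$@"                      # run the destructive command only on approval
    else
        echo "Cancelled by user. No changes made."
    fi
}

# Example: confirm_and_run "Delete stale cache" rm -rf /tmp/stale_cache
```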

Rule 3: Iterative Forensics

The prompt defines a clear workflow: Analysis -> Reconnaissance -> Execution.
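
In shell terms, one iteration of that workflow might begin with a read-only reconnaissance pass like this minimal sketch (the log path is an assumed example):

```shell
# RECONNAISSANCE: read-only, wrapped in report markers for the feedback loop
echo -e "\n\n=== START SYSTEM REPORT ==="
uname -a                                          # kernel and architecture
cat /etc/*release 2>/dev/null || true             # distro identification (Linux)
ls -lt /var/log 2>/dev/null | head -n 5 || true   # most recently touched logs
echo "=== END SYSTEM REPORT ==="
```

Only after this report comes back does the AI move to the Execution phase with a gated write.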

The final prompt

Everything we have learned so far is distilled into this prompt:

# System Instructions: Proactive System Administrator (Forensic & Safety First)

You act as an experienced, forensically-minded system administrator (Unix/Linux/macOS). Your goal is to solve problems by actively querying the system for missing information and developing solutions. Your top priority is system integrity and transparency toward the user.

## RULE 1: "SEARCH INSTEAD OF ASKING" (PROACTIVE)
When the user provides technical keywords (e.g., "Website not working", "Cronjob not running"), you must **NOT** ask for paths or versions and **NOT** generate scripts with placeholders (like `PATH="/please/fill/in"`).  
* **Instead:** Generate commands that search the system (e.g., using `find`, `ps aux`, `grep`, `lsof`) to determine paths, users, and configurations by themselves.  
* **OS-Agnostic:** Do not assume Ubuntu/Debian blindly. Check the OS before using specific package managers (`yay`, `apt`, `yum`, `brew`).

## RULE 2: CLEAR OUTPUT MARKING (FEEDBACK LOOP)
The user must immediately recognize which text to copy from the terminal and send back to you.  
* Wrap all commands whose output you need for analysis with visible markers.  
* **Syntax:**
```bash
echo -e "\n\n=== START SYSTEM REPORT ==="
# Your analysis commands (e.g., cat logs, ps aux...)
echo "=== END SYSTEM REPORT ==="
```

## RULE 3: SAFETY PROTOCOL & PLAN DISPLAY

Since the user executes your code, every change (write, delete, restart, install) must first display a transparent plan and then be confirmed.

**Mandatory syntax for modifying commands:**

```bash
# 1. MAKE PLAN TRANSPARENT
echo -e "\n!!! PLANNED CHANGES !!!"
echo "1. [Describe exactly what will happen, e.g., file X will be deleted]"
echo "2. [e.g., service Y will be restarted]"
echo "-----------------------------------"

# 2. INTERACTIVE PROMPT
read -p "Execute this plan now? (y/n) " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
    echo "Executing..."
    # PLACE modifying commands here
    # ...
    echo "Done."
else
    echo "Cancelled by user. No changes made."
fi
```

*Exception: Pure read-only commands (`ls`, `cat`, `grep`) require no prompt and run immediately.*

## YOUR WORK MODE (ITERATIVE)

1. **FORENSIC ANALYSIS:** Break down user input. What do we know?
2. **RECONNAISSANCE (READ-ONLY MODE):** If info is missing, generate harmless commands to determine status quo (OS check, paths, logs). **Use output markers.**
3. **EXECUTION (WRITE MODE):** Only when facts are clear, create the script for changes – **always wrapped with plan display & safety prompt.**

## ADDITIONAL SAFETY NETS

* **Anti-Lockout:** For firewall/SSH changes, implement automatic rollback (e.g., `sleep 30 && revert_command` in background or as a trap) in case the user loses connection.
* **Read-Only First:** Before writing (`sed`, `rm`), check (`ls`, `test -f`) that the target exists.
* **Non-Interactive:** No editors (`nano`, `vim`). Use `sed`, `awk`, `printf`, or `cat <<EOF`.

## RESPONSE FORMAT

1. **Assessment:** Brief: what do you suspect? Which OS do we expect?
2. **Code Block (Bash/Shell):**

   * Always begin new sessions with system identification (`uname -a` or `cat /etc/*release`).
   * Use the `=== START/END REPORT ===` markers for data you need.
   * Use `read -p` block with plan display for modifications.
3. **Instruction:** Ask the user to return the content between the report markers.

# START OF SESSION

The user reports a problem.

1. Ignore implicit assumptions.
2. Start immediately with analysis (OS check + search).
3. No placeholders.
4. Mandatory safety prompt with plan presentation for changes.

# USER INPUT

From now on, the AI communicates through the shell, allowing problems and responses to be copied back and forth between the AI and the console until the goal is achieved.

The Use Case: Forensic Analysis of a Rooted Android Device

To demonstrate the power of this prompt, it was applied to a real-world scenario: A rooted Google Pixel 7 ("panther") connected via ADB (Android Debug Bridge) to a Linux host.

The goal: find out whether the device is "calling home" or hiding malware. The USER INPUT I gave to the AI:

Can you perform a forensic analysis on this smartphone and investigate whether any unusual activities are taking place or have taken place?

This is where I am at the moment:

root@michael:~ # adb shell
panther:/ $ su -
panther:/ # ^D
panther:/ $ ^D
root@michael:~ # 

Phase 1: Automated Reconnaissance

Upon receiving the initial "start" signal, the AI strictly followed the "Search Instead of Ask" rule.

A crucial part of this phase was the creation of a local evidence directory on the host machine.
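
The $LOG_DIR variable used in the commands below was prepared in that step; a sketch of the preamble (the exact directory layout is my assumption - the AI's actual naming may have differed):

```shell
# Timestamped case directory on the Linux host (layout is an assumed example)
CASE_DIR="$HOME/forensics/panther_$(date +%Y%m%d_%H%M%S)"
LOG_DIR="$CASE_DIR/logs"
EVIDENCE_DIR="$CASE_DIR/evidence"
mkdir -p "$LOG_DIR" "$EVIDENCE_DIR"
echo "Evidence will be collected in: $CASE_DIR"
```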

# A. NETWORK: Who is phoning home?
adb shell "su -c 'netstat -np | grep -v unix'" > "$LOG_DIR/network_connections.txt"

# B. PROCESSES: What is running?
adb shell "su -c 'ps -A -o USER,PID,PPID,NAME'" > "$LOG_DIR/process_list.txt"

# C. DROP ZONES: Checking /data/local/tmp
adb shell "su -c 'ls -laR /data/local/tmp/'" > "$LOG_DIR/temp_files.txt"
...

All I had to do was copy the commands into my shell and paste the shell output back into the chat with the AI.

Phase 2: The Anomaly in /data/local/tmp

The AI's scan of /data/local/tmp - a classic drop zone for Android malware, because it is one of the few places where the shell user can both write and execute - revealed something disturbing. The AI warned me immediately and dug into the evidence.

A directory named sys-ca-copy and a log file, CustomCACert.log, were discovered, and an APK, dndsync_mobile.apk, was sitting in the temporary folder.

The presence of CustomCACert.log suggested a Man-in-the-Middle (MITM) attack. I was presented with a new shell code block containing this line:

adb shell "su -c 'mount | grep -E \"cacerts|tmp\"'"

The output confirmed the AI's suspicion:

tmpfs on /system/etc/security/cacerts type tmpfs

The system's trust store had indeed been overlaid. But that was a relic of earlier testing that I had forgotten to remove. I told the AI, and it moved on to the other findings.

Phase 3: Handling Permissions and Evidence Extraction

The prompt’s “Safety Protocol” proved its worth during the extraction phase. dndsync_mobile.apk was flagged as a potential threat. The adb pull failed because the file was owned by root, while ADB runs as the shell user.

Recognizing the error, the AI proposed a multi-step solution wrapped in the mandatory safety block:

  1. Copy the file to a temporary name using su.
  2. Change permissions to 666 (readable and writable by all users).
  3. Pull the file to the host.
  4. Delete the temporary copy.

This was the corresponding codeblock I had to copy over:

read -p "ATTENTION: I will create a readable copy of the suspect APK. Continue? (y/n) " -n 1 -r
if [[ $REPLY =~ ^[Yy]$ ]]; then
    adb shell "su -c 'cp /data/local/tmp/dndsync_mobile.apk /data/local/tmp/evidence_copy.apk'"
    adb shell "su -c 'chmod 666 /data/local/tmp/evidence_copy.apk'"
    adb pull "/data/local/tmp/evidence_copy.apk" "$EVIDENCE_DIR/dndsync_mobile_suspect.apk"
    # ... cleanup ...
fi

This nuanced handling of Linux permissions on a remote system demonstrates the "Expert" persona defined in the prompt.

Phase 4: Static Analysis and Contextualization

With the APK on the host, the AI switched to static analysis tools (strings, unzip, file) to determine the nature of the "malware."

The analysis revealed that dndsync_mobile.apk was actually a legitimate open-source tool (DND Sync) for synchronizing notifications with Linux desktops.
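
A sketch of typical static checks at this stage (the file name follows the extraction step above; whether these exact commands were generated is my assumption):

```shell
# Static triage of the pulled APK - no execution, just inspection
APK="$EVIDENCE_DIR/dndsync_mobile_suspect.apk"

if [ -f "$APK" ]; then
    file "$APK"                           # a genuine APK identifies as a Zip archive
    unzip -l "$APK" | head -n 20          # manifest, classes.dex, signing certs
    strings "$APK" | grep -Ei 'https?://' | sort -u | head   # embedded endpoints
else
    echo "APK not found - run the extraction step first."
fi
```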

However, the MITM setup remained: the certificate had been placed on the phone when I installed reqable. As soon as I told this to the AI, it stopped trying to eliminate the threat.

This distinction is vital: The system was technically compromised (SSL interception active), but operationally secure (authorized by the admin).

Universality of the Prompt

While this example focused on Android and Bash, the underlying "System Instruction" is universal. Its core principles - Reconnaissance First, Interactive Safety, Search-Don't-Ask - can be applied to any command-line interface.

  • In PowerShell: The prompt would generate Get-Service | Where-Object Status -eq 'Stopped' instead of asking "Which services are stopped?".
  • In FortiGate CLI: It would use diagnose sys session list before suggesting a firewall policy change.

The prompt effectively bridges the gap between the LLM's vast knowledge base and the specific, often messy reality of a live system.

Conclusion

Using a structured, persona-driven prompt transforms an LLM from a passive encyclopedia into an active collaborator. Your only remaining role is to ferry text between the AI and the host - though even that can be handled by tools like n8n or similar automation platforms.

Enforcing the "Search Instead of Ask" and "Safety First" rules minimizes the risk of AI hallucinations causing damage to the system.