SQL Ecosystem

This skill should be used when working with SQL databases, "SELECT", "INSERT", "UPDATE", "DELETE", "CREATE TABLE", "JOIN", "INDEX", "EXPLAIN", transactions, or database migrations. Provides comprehensive SQL patterns across PostgreSQL, MySQL, and SQLite.

$ Install

git clone https://github.com/takeokunn/nixos-configuration /tmp/nixos-configuration && cp -r /tmp/nixos-configuration/home-manager/programs/claude-code/skills/sql-ecosystem ~/.claude/skills/sql-ecosystem

// tip: Run this command in your terminal to install the skill



<sql_fundamentals>
<data_types>
ANSI SQL standard data types supported across major databases:

```sql
-- Numeric types
INTEGER, SMALLINT, BIGINT
DECIMAL(precision, scale), NUMERIC(precision, scale)
REAL, DOUBLE PRECISION

-- String types
CHAR(n), VARCHAR(n), TEXT

-- Date/Time types
DATE, TIME, TIMESTAMP, INTERVAL

-- Boolean
BOOLEAN

-- MySQL specific
TINYINT, MEDIUMINT
ENUM('value1', 'value2'), SET('a', 'b', 'c')
JSON (validated; stored in an internal binary format)

-- SQLite specific
-- Uses type affinity: TEXT, INTEGER, REAL, BLOB, NULL
-- Any type name is accepted but mapped to an affinity
```

Prefer ANSI types for portability; use DB-specific types when their features are needed.
</data_types>

<ddl_patterns>
Table creation with constraints:

```sql
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    email VARCHAR(255) NOT NULL UNIQUE,
    name VARCHAR(100) NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    CONSTRAINT email_format CHECK (email ~* '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$')
);

CREATE TABLE orders (
    id SERIAL PRIMARY KEY,
    user_id INTEGER NOT NULL,
    total DECIMAL(10, 2) NOT NULL,
    status VARCHAR(20) DEFAULT 'pending',
    CONSTRAINT fk_user FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE
);
```

Schema evolution and indexing:

```sql
-- Add column with default (instant in PostgreSQL 11+)
ALTER TABLE users ADD COLUMN active BOOLEAN DEFAULT true;

-- Rename column
ALTER TABLE users RENAME COLUMN name TO full_name;

-- Add constraint
ALTER TABLE users ADD CONSTRAINT unique_phone UNIQUE (phone);

-- Drop constraint
ALTER TABLE users DROP CONSTRAINT unique_phone;

-- Unique index
CREATE UNIQUE INDEX idx_users_email_unique ON users(email);

-- Composite index (column order matters for query optimization)
CREATE INDEX idx_orders_user_status ON orders(user_id, status);

-- Partial index (PostgreSQL)
CREATE INDEX idx_active_users ON users(email) WHERE active = true;

-- Expression index (PostgreSQL)
CREATE INDEX idx_users_lower_email ON users(LOWER(email));
```

<decision_tree name="index_selection">
What type of queries will use this index?
<if_yes condition="Equality lookups">B-tree (default)</if_yes>
<if_yes condition="Range queries">B-tree</if_yes>
<if_yes condition="Full-text search">GIN with tsvector (PostgreSQL)</if_yes>
<if_yes condition="JSON containment">GIN (PostgreSQL)</if_yes>
<if_yes condition="Geospatial">GiST (PostgreSQL)</if_yes>
</decision_tree>
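Where the decision tree points away from B-tree, creation looks like this; a brief sketch (table and column names are illustrative, and the GiST line assumes the PostGIS extension):

```sql
-- Full-text search: GIN over a tsvector expression
CREATE INDEX idx_articles_fts ON articles USING GIN (to_tsvector('english', body));

-- JSON containment: GIN over a jsonb column
CREATE INDEX idx_events_payload ON events USING GIN (payload jsonb_path_ops);

-- Geospatial: GiST (assumes PostGIS and a geometry column)
CREATE INDEX idx_places_geom ON places USING GiST (geom);
```
</ddl_patterns>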

<dml_patterns>
Query patterns for data retrieval and modification:

```sql
-- Basic select with filtering
SELECT id, email, name
FROM users
WHERE active = true
ORDER BY created_at DESC
LIMIT 10 OFFSET 0;

-- Aggregate with grouping
SELECT status, COUNT(*) AS count, SUM(total) AS revenue
FROM orders
WHERE created_at >= '2024-01-01'
GROUP BY status
HAVING COUNT(*) > 10
ORDER BY revenue DESC;

-- Multi-row insert
INSERT INTO users (email, name) VALUES
    ('user1@example.com', 'User One'),
    ('user2@example.com', 'User Two'),
    ('user3@example.com', 'User Three');

-- Insert with returning (PostgreSQL)
INSERT INTO users (email, name)
VALUES ('new@example.com', 'New User')
RETURNING id, created_at;

-- Upsert (PostgreSQL)
INSERT INTO users (email, name)
VALUES ('user@example.com', 'Updated Name')
ON CONFLICT (email) DO UPDATE SET name = EXCLUDED.name;
```

```sql
-- Upsert (MySQL)
INSERT INTO users (email, name)
VALUES ('user@example.com', 'Updated Name')
ON DUPLICATE KEY UPDATE name = VALUES(name);
```
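SQLite (3.24+) also supports upsert, following the PostgreSQL syntax; a minimal sketch:

```sql
-- Upsert (SQLite 3.24+)
INSERT INTO users (email, name)
VALUES ('user@example.com', 'Updated Name')
ON CONFLICT (email) DO UPDATE SET name = excluded.name;
```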

```sql
-- Update with subquery
UPDATE orders SET status = 'cancelled'
WHERE user_id IN (SELECT id FROM users WHERE active = false);

-- Update with join (PostgreSQL)
UPDATE orders o SET status = 'vip'
FROM users u
WHERE o.user_id = u.id AND u.vip = true;

-- Update with returning (PostgreSQL)
UPDATE users SET active = false WHERE id = 1 RETURNING *;

-- Delete with subquery
DELETE FROM orders
WHERE user_id IN (SELECT id FROM users WHERE active = false);

-- Truncate (faster for removing all rows)
TRUNCATE TABLE logs;
TRUNCATE TABLE logs RESTART IDENTITY;  -- PostgreSQL: also reset sequences

-- Soft delete pattern (prefer this)
UPDATE users SET deleted_at = NOW() WHERE id = 1;
```

Prefer soft deletes for audit trails; use hard deletes only for GDPR/compliance.

Parameterized queries in common clients:

```
-- PostgreSQL with pg (Node.js)
client.query(
  'SELECT * FROM users WHERE email = $1 AND status = $2',
  [userEmail, status]
)

-- MySQL with mysql-connector (Python)
cursor.execute(
    "SELECT * FROM users WHERE email = %s AND status = %s",
    (user_email, status)
)

-- MySQL with mysql2 (Node.js)
connection.execute(
  'SELECT * FROM users WHERE email = ? AND status = ?',
  [userEmail, status]
)

-- SQLite with sqlite3 (Python)
cursor.execute(
    "SELECT * FROM users WHERE email = ? AND status = ?",
    (user_email, status)
)

-- Go with database/sql
db.Query(
    "SELECT * FROM users WHERE email = $1 AND status = $2",
    userEmail, status,
)
```

NEVER use string concatenation or template literals with user input; this enables SQL injection attacks.

```
-- DANGEROUS: SQL injection vulnerability
query = "SELECT * FROM users WHERE email = '" + user_input + "'"
query = f"SELECT * FROM users WHERE email = '{user_input}'"

-- If user_input = "'; DROP TABLE users; --"
-- Executes: SELECT * FROM users WHERE email = ''; DROP TABLE users; --'
```

```sql
-- SAFE: Escape wildcards before using in LIKE
-- Python: escaped_input = user_input.replace('%', r'\%').replace('_', r'\_')
-- Then use a parameterized query:
cursor.execute(
    "SELECT * FROM products WHERE name LIKE %s",
    ('%' + escaped_input + '%',)
)

-- PostgreSQL: state the ESCAPE character explicitly
SELECT * FROM products WHERE name LIKE '%' || $1 || '%' ESCAPE '\';

-- Alternative: use position() or strpos() for exact substring matching
SELECT * FROM products WHERE position($1 in name) > 0;
```

The wildcards % and _ in user input can bypass intended restrictions.

Dynamic identifiers (table and column names) cannot be parameterized; validate them instead:

```
-- SAFE: Whitelist allowed values (Python example)
ALLOWED_COLUMNS = {'id', 'name', 'email', 'created_at'}
ALLOWED_TABLES = {'users', 'products', 'orders'}

if column_name not in ALLOWED_COLUMNS:
    raise ValueError(f"Invalid column: {column_name}")
if table_name not in ALLOWED_TABLES:
    raise ValueError(f"Invalid table: {table_name}")

-- PostgreSQL: use quote_ident() when building dynamic SQL (PL/pgSQL)
EXECUTE 'SELECT ' || quote_ident(column_name) || ' FROM ' || quote_ident(table_name);

-- Use identifier quoting as additional protection
query = f'SELECT "{column_name}" FROM "{table_name}"'
```

Never use user input directly for table/column names; always validate against a whitelist.
</dml_patterns>

Constraint quick reference:

```sql
-- Foreign key
FOREIGN KEY (user_id) REFERENCES users(id)
FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE
FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE SET NULL

-- Unique
UNIQUE (email)
UNIQUE (user_id, product_id)  -- composite unique

-- Check
CHECK (price > 0)
CHECK (status IN ('pending', 'active', 'completed'))

-- Not null
NOT NULL

-- Default
DEFAULT CURRENT_TIMESTAMP
DEFAULT 'pending'
```
</sql_fundamentals>

<query_patterns>
Return only matching rows from both tables:

```sql
SELECT u.name, o.total
FROM users u
INNER JOIN orders o ON u.id = o.user_id
WHERE o.status = 'completed';
```
<use_case>When you need data from both tables and only care about matches</use_case>
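For contrast, a LEFT JOIN keeps rows with no match; a brief sketch using the same tables:

```sql
-- Users with no orders still appear, with o.total = NULL
SELECT u.name, o.total
FROM users u
LEFT JOIN orders o ON u.id = o.user_id;
```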

Recursive CTE for hierarchical data:

```sql
WITH RECURSIVE org_tree AS (
    -- Base case: employees with no manager
    SELECT id, name, manager_id, 1 AS level
    FROM employees
    WHERE manager_id IS NULL

    UNION ALL

    -- Recursive case: subordinates
    SELECT e.id, e.name, e.manager_id, ot.level + 1
    FROM employees e
    INNER JOIN org_tree ot ON e.manager_id = ot.id
)
SELECT * FROM org_tree ORDER BY level, name;
```
<use_case>Tree structures, bill of materials, path finding</use_case>

<window_functions>
Assign unique sequential numbers:

```sql
SELECT name, total,
       ROW_NUMBER() OVER (ORDER BY total DESC) AS rank
FROM orders;

-- Partition by user
SELECT user_id, total,
       ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY created_at DESC) AS order_num
FROM orders;
```
<use_case>Pagination, deduplication, ranking</use_case>
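As one instance of the deduplication use case, a sketch that keeps only the newest row per email (assumes a users table containing duplicates):

```sql
WITH ranked AS (
    SELECT id,
           ROW_NUMBER() OVER (PARTITION BY email ORDER BY created_at DESC) AS rn
    FROM users
)
DELETE FROM users
WHERE id IN (SELECT id FROM ranked WHERE rn > 1);
```
</window_functions>
</query_patterns>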

<schema_design>
First Normal Form: atomic values, no repeating groups.

```sql
-- Violation: comma-separated values
CREATE TABLE bad_orders (
    id INTEGER,
    products TEXT  -- 'apple,banana,orange'
);

-- 1NF compliant: separate rows
CREATE TABLE order_items (
    order_id INTEGER,
    product_id INTEGER,
    PRIMARY KEY (order_id, product_id)
);

-- 2NF compliant: separate product table
CREATE TABLE products (
    id INTEGER PRIMARY KEY,
    name TEXT
);
CREATE TABLE order_items (
    order_id INTEGER,
    product_id INTEGER REFERENCES products(id),
    quantity INTEGER,
    PRIMARY KEY (order_id, product_id)
);

-- 3NF compliant: separate locations
CREATE TABLE locations (
    zip_code TEXT PRIMARY KEY,
    city TEXT
);
CREATE TABLE users (
    id INTEGER PRIMARY KEY,
    name TEXT,
    zip_code TEXT REFERENCES locations(zip_code)
);
```

<decision_tree name="normalization_level">
What are the priority requirements?
<if_yes condition="Data integrity and minimal redundancy">Normalize to 3NF</if_yes>
<if_yes condition="Read performance critical">Consider denormalization for hot paths</if_yes>
<if_yes condition="Write-heavy with simple reads">Normalize fully</if_yes>
<if_yes condition="Reporting/analytics">Consider star schema denormalization</if_yes>
</decision_tree>
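For the read-heavy branch, one denormalization option in PostgreSQL is a materialized view; a sketch (view and column names illustrative):

```sql
CREATE MATERIALIZED VIEW user_order_summary AS
SELECT u.id AS user_id,
       u.name,
       COUNT(o.id) AS order_count,
       COALESCE(SUM(o.total), 0) AS lifetime_total
FROM users u
LEFT JOIN orders o ON o.user_id = u.id
GROUP BY u.id, u.name;

-- Refresh periodically; REFRESH ... CONCURRENTLY needs a unique index on the view
REFRESH MATERIALIZED VIEW user_order_summary;
```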

Soft delete pattern:

```sql
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    email VARCHAR(255) NOT NULL,
    deleted_at TIMESTAMP
);

-- Partial unique index: uniqueness enforced only for active rows (PostgreSQL)
CREATE UNIQUE INDEX unique_active_email ON users(email) WHERE deleted_at IS NULL;

-- Query active records
SELECT * FROM users WHERE deleted_at IS NULL;
```
<use_case>Audit trails, data recovery, compliance</use_case>

Auto-update trigger (PostgreSQL):

```sql
CREATE OR REPLACE FUNCTION update_updated_at()
RETURNS TRIGGER AS $$
BEGIN
    NEW.updated_at = CURRENT_TIMESTAMP;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_updated_at
BEFORE UPDATE ON orders
FOR EACH ROW
EXECUTE FUNCTION update_updated_at();
```

Polymorphic association; a minimal sketch assuming a MySQL-style comments table (the inline INDEX syntax is MySQL-specific):

```sql
CREATE TABLE comments (
    id INT AUTO_INCREMENT PRIMARY KEY,
    body TEXT NOT NULL,
    commentable_type VARCHAR(50) NOT NULL,  -- e.g. 'post', 'photo'
    commentable_id INT NOT NULL,
    INDEX idx_commentable (commentable_type, commentable_id)
);
```

Cannot enforce an FK constraint across multiple parent tables; validate at the application level.

Lookup table instead of ENUM:

```sql
CREATE TABLE order_statuses (
    id SERIAL PRIMARY KEY,
    name VARCHAR(50) NOT NULL UNIQUE
);

INSERT INTO order_statuses (name) VALUES
    ('pending'), ('processing'), ('shipped'), ('delivered'), ('cancelled');

CREATE TABLE orders (
    id SERIAL PRIMARY KEY,
    status_id INTEGER REFERENCES order_statuses(id)
);
```

Prefer over ENUM for flexibility; values are easier to add and modify.
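Resolving the lookup value at query time is a single join; for example:

```sql
SELECT o.id, s.name AS status
FROM orders o
JOIN order_statuses s ON s.id = o.status_id
WHERE s.name = 'shipped';
```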

Many-to-many junction table:

```sql
CREATE TABLE user_roles (
    user_id INTEGER REFERENCES users(id) ON DELETE CASCADE,
    role_id INTEGER REFERENCES roles(id) ON DELETE CASCADE,
    granted_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (user_id, role_id)
);
```
</schema_design>

<query_optimization>
<explain_analysis>
Understand query execution plans:

```sql
-- PostgreSQL
EXPLAIN SELECT * FROM users WHERE email = 'test@example.com';
EXPLAIN ANALYZE SELECT * FROM users WHERE email = 'test@example.com';
EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON) SELECT ...;

-- MySQL
EXPLAIN SELECT * FROM users WHERE email = 'test@example.com';
EXPLAIN ANALYZE SELECT * FROM users WHERE email = 'test@example.com';  -- MySQL 8.0.18+

-- SQLite
EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = 'test@example.com';
```

Key metrics to watch:

```
cost=startup..total  -- Estimated cost units
rows=N               -- Estimated row count
actual time=X..Y     -- Real execution time (with ANALYZE)
loops=N              -- Number of iterations
```
</explain_analysis>

<index_strategies>
Covering index: the index contains all columns the query needs.

```sql
-- Query only needs email and name
SELECT email, name FROM users WHERE email LIKE 'a%';

-- Covering index avoids the table lookup
CREATE INDEX idx_users_email_name ON users(email, name);

-- Leftmost prefix rule for a composite index on (user_id, status):
CREATE INDEX idx_orders_user_status ON orders(user_id, status);
-- Supports: WHERE user_id = ?
-- Supports: WHERE user_id = ? AND status = ?
-- Does NOT support: WHERE status = ?

-- Partial index: only index recent orders
CREATE INDEX idx_recent_orders ON orders(created_at)
WHERE created_at > '2024-01-01';
```
<use_case>When queries always filter by the same condition</use_case>
</index_strategies>

<common_optimizations>
Select only needed columns:

```sql
-- Bad: fetches all columns
SELECT * FROM users WHERE id = 1;

-- Good: only needed columns
SELECT id, name, email FROM users WHERE id = 1;

-- Good: EXISTS stops at the first match
SELECT EXISTS(SELECT 1 FROM orders WHERE user_id = 1);

-- Good: batch insert instead of row-by-row inserts
INSERT INTO logs (message) VALUES ('log1'), ('log2'), ('log3');

-- Good: batch update with CASE instead of one UPDATE per row
UPDATE products SET price = CASE id
    WHEN 1 THEN 10.00
    WHEN 2 THEN 20.00
    WHEN 3 THEN 30.00
END
WHERE id IN (1, 2, 3);

-- Keyset pagination (efficient for large datasets)
SELECT * FROM orders
WHERE id > 1000  -- last seen ID
ORDER BY id
LIMIT 20;

-- Cursor-based with composite key
SELECT * FROM orders
WHERE (created_at, id) > ('2024-01-01', 1000)
ORDER BY created_at, id
LIMIT 20;
```

Keyset pagination is O(1) per page; offset pagination is O(n) in the number of skipped rows.

```sql
-- Bad: OR across different columns often prevents index usage
SELECT * FROM users WHERE email = 'a@b.com' OR name = 'John';

-- Good: UNION allows index usage on each condition
SELECT * FROM users WHERE email = 'a@b.com'
UNION
SELECT * FROM users WHERE name = 'John';
```
</common_optimizations>
</query_optimization>

Transactions are atomic: all statements succeed or none do.

```sql
-- Transaction fails if a constraint is violated
BEGIN;
UPDATE accounts SET balance = balance - 1000 WHERE id = 1;  -- Fails if balance < 1000
COMMIT;

BEGIN;
-- Protected from concurrent modifications
SELECT balance FROM accounts WHERE id = 1;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
COMMIT;
```
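A minimal transfer sketch with a savepoint for partial rollback (standard SQL):

```sql
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
SAVEPOINT before_credit;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
-- On an error after the savepoint: ROLLBACK TO SAVEPOINT before_credit;
COMMIT;
```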

<isolation_levels>
READ UNCOMMITTED: lowest isolation; can read uncommitted changes. Allows dirty reads, non-repeatable reads, and phantom reads.
<use_case>Rarely used; only for approximate counts/analytics</use_case>

<decision_tree name="isolation_selection">
What is the consistency requirement?
<if_yes condition="Approximate data acceptable">READ UNCOMMITTED</if_yes>
<if_yes condition="Standard OLTP">READ COMMITTED (default)</if_yes>
<if_yes condition="Report consistency needed">REPEATABLE READ</if_yes>
<if_yes condition="Critical financial/inventory">SERIALIZABLE</if_yes>
</decision_tree>
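Setting the level per transaction; a brief sketch:

```sql
-- PostgreSQL: must be the first statement after BEGIN
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- ... critical reads/writes ...
COMMIT;

-- MySQL: applies to the next transaction
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
START TRANSACTION;
-- ...
COMMIT;
```
</isolation_levels>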

<locking_patterns>
Lock specific rows for update:

```sql
-- PostgreSQL/MySQL
BEGIN;
SELECT * FROM accounts WHERE id = 1 FOR UPDATE;
-- Row is locked until COMMIT
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
COMMIT;

-- NOWAIT: fail immediately if locked
SELECT * FROM accounts WHERE id = 1 FOR UPDATE NOWAIT;

-- SKIP LOCKED: skip locked rows (queue processing)
SELECT * FROM jobs WHERE status = 'pending'
FOR UPDATE SKIP LOCKED
LIMIT 1;
```
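A common queue-worker refinement claims the job in the same statement; a sketch (PostgreSQL, assumes status and created_at columns):

```sql
UPDATE jobs SET status = 'running'
WHERE id = (
    SELECT id FROM jobs
    WHERE status = 'pending'
    ORDER BY created_at
    FOR UPDATE SKIP LOCKED
    LIMIT 1
)
RETURNING *;
```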

Advisory locks (PostgreSQL):

```sql
-- Transaction-level lock (auto-released on commit)
SELECT pg_advisory_xact_lock(12345);

-- Try lock (non-blocking)
SELECT pg_try_advisory_lock(12345);  -- returns true/false
```
<use_case>Distributed locks, rate limiting, singleton processes</use_case>

Optimistic locking with a version column:

```sql
-- Read with version
SELECT id, name, price, version FROM products WHERE id = 1;  -- version = 5

-- Update with version check
UPDATE products
SET price = 29.99, version = version + 1
WHERE id = 1 AND version = 5;

-- If rows affected = 0, a conflict occurred: retry or raise an error
```
<use_case>Low-contention updates, web applications</use_case>
</locking_patterns>

<deadlock_prevention>
Always acquire locks in the same order:

```sql
-- Always lock the lower ID first
BEGIN;
SELECT * FROM accounts WHERE id = 1 FOR UPDATE;
SELECT * FROM accounts WHERE id = 2 FOR UPDATE;
-- Transfer...
COMMIT;

-- MySQL: bound the lock wait
SET innodb_lock_wait_timeout = 5;
```
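PostgreSQL's counterparts are session-level timeouts:

```sql
SET lock_timeout = '5s';        -- give up waiting for a lock after 5 seconds
SET statement_timeout = '30s';  -- abort any statement running longer than 30 seconds
```
</deadlock_prevention>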

Migration file naming:

```
-- Sequential
001_create_users_table.sql
002_add_email_to_users.sql

-- Timestamp-based (e.g. 20240115093000_add_email_to_users.sql)
```

Timestamp-based naming prevents numbering conflicts in team environments.

Reversible migration:

```sql
-- +migrate Up
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    email VARCHAR(255) NOT NULL
);

-- +migrate Down
DROP TABLE users;
```

Idempotent migrations can be re-run safely:

```sql
CREATE INDEX IF NOT EXISTS idx_users_email ON users(email);

ALTER TABLE users ADD COLUMN IF NOT EXISTS name VARCHAR(100);
```
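The same guards apply to destructive statements (object names illustrative):

```sql
DROP INDEX IF EXISTS idx_users_email;
DROP TABLE IF EXISTS temp_import;
ALTER TABLE users DROP COLUMN IF EXISTS legacy_flag;  -- PostgreSQL
```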

<zero_downtime>
Add a nullable column first, then populate:

```sql
-- Step 1: Add nullable column (instant in PostgreSQL 11+)
ALTER TABLE users ADD COLUMN phone VARCHAR(20);

-- Step 2: Backfill data (in batches)
UPDATE users SET phone = ''
WHERE phone IS NULL AND id BETWEEN 1 AND 1000;

-- Step 3: Add NOT NULL constraint
ALTER TABLE users ALTER COLUMN phone SET NOT NULL;
```
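On large PostgreSQL tables, step 3 scans the whole table under an exclusive lock; a sketch of the usual workaround on PostgreSQL 12+, using a validated CHECK constraint:

```sql
-- Add the check without validating existing rows (fast)
ALTER TABLE users ADD CONSTRAINT phone_not_null CHECK (phone IS NOT NULL) NOT VALID;

-- Validate separately; takes only a lightweight lock
ALTER TABLE users VALIDATE CONSTRAINT phone_not_null;

-- PostgreSQL 12+ uses the validated check to skip the full-table scan
ALTER TABLE users ALTER COLUMN phone SET NOT NULL;
ALTER TABLE users DROP CONSTRAINT phone_not_null;
```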

On PostgreSQL versions before 11, adding a column with a default requires a table rewrite; use the nullable + backfill + NOT NULL pattern above instead.

Rename a column without downtime:

```sql
-- Step 1: Add the new column
ALTER TABLE users ADD COLUMN full_name VARCHAR(100);

-- Step 2: Copy data (in batches)
UPDATE users SET full_name = name WHERE full_name IS NULL;

-- Step 3: Deploy code reading both columns
-- Step 4: Deploy code writing to both columns
-- Step 5: Deploy code reading only the new column

-- Step 6: Drop old column
ALTER TABLE users DROP COLUMN name;
```

Build an index without blocking writes (PostgreSQL); the notes below refer to CREATE INDEX CONCURRENTLY (index name illustrative):

```sql
CREATE INDEX CONCURRENTLY idx_orders_created_at ON orders(created_at);
-- Note: cannot run inside a transaction
-- May take longer but allows concurrent reads/writes
```

CONCURRENTLY can fail and leave an INVALID index; check that the index is valid after creation.

Drop a column safely:

```sql
-- Step 1: Deploy code that no longer reads or writes old_column
-- Step 2: Deploy and wait for old code to drain
-- Step 3: Drop column
ALTER TABLE users DROP COLUMN old_column;
```
</zero_downtime>

<data_migration>
Process large datasets in chunks:

```sql
-- Process in batches of 1000 (PostgreSQL)
DO $$
DECLARE
    batch_size INTEGER := 1000;
    rows_updated INTEGER;
BEGIN
    LOOP
        UPDATE users SET email_normalized = LOWER(email)
        WHERE email_normalized IS NULL
          AND id IN (
              SELECT id FROM users
              WHERE email_normalized IS NULL
              LIMIT batch_size
          );

        GET DIAGNOSTICS rows_updated = ROW_COUNT;
        EXIT WHEN rows_updated = 0;

        COMMIT;
        PERFORM pg_sleep(0.1);  -- Reduce load
    END LOOP;
END $$;
```

Cursor-based batching:

```sql
BEGIN;
-- Declare a cursor over the rows to process (query illustrative)
DECLARE batch_cursor CURSOR FOR SELECT id FROM users;
FETCH 1000 FROM batch_cursor;
-- Process batch
-- Repeat FETCH until no more rows are returned
CLOSE batch_cursor;
COMMIT;
```
</data_migration>

<context7_integration> Use Context7 MCP for up-to-date SQL documentation

<sql_libraries> </sql_libraries>

<usage_patterns>
resolve-library-id libraryName="postgresql"
get-library-docs context7CompatibleLibraryID="/websites/postgresql" topic="window functions"
</usage_patterns>
</context7_integration>

<anti_patterns>
- Using SELECT * in production queries: explicitly list required columns for performance and clarity
</anti_patterns>

<best_practices>
- Use parameterized queries to prevent SQL injection
- Create indexes on foreign keys and frequently filtered columns
- Use transactions for multi-statement operations
- Analyze query plans with EXPLAIN before optimizing
- Use appropriate isolation levels for transaction requirements
- Implement soft deletes for audit trails
- Name constraints explicitly for easier migration management
- Prefer keyset pagination over offset for large datasets
- Use CTEs for complex query readability
- Batch large data modifications to reduce lock contention
- Test migrations on production-like data before deployment
</best_practices>

<error_escalation>
- Missing index on an infrequently queried column: note for future optimization, proceed
- Query performance degradation under load: analyze EXPLAIN output, propose index or query optimization
- Deadlock or lock timeout in production: stop, analyze lock patterns, present resolution options
- Data corruption or SQL injection vulnerability: block operation, require immediate remediation
</error_escalation>

<related_agents>
- Database design, ER diagrams, migration planning
- Query optimization, index analysis
- SQL injection prevention, access control
</related_agents>

<related_skills>
- Navigate database schema and find query patterns
- Fetch PostgreSQL, MySQL, SQLite documentation
- Debug query performance issues
</related_skills>