# onepassword-cli-coder

This skill guides integrating 1Password CLI (op) for secret management in development workflows. Use when loading secrets for infrastructure, deployments, or local development.

## Installation

Run this command in your terminal to install the skill:

```shell
git clone https://github.com/majesticlabs-dev/majestic-marketplace /tmp/majestic-marketplace && cp -r /tmp/majestic-marketplace/plugins/majestic-devops/skills/onepassword-cli-coder ~/.claude/skills/majestic-marketplace/
```
```yaml
name: onepassword-cli-coder
description: This skill guides integrating 1Password CLI (op) for secret management in development workflows. Use when loading secrets for infrastructure, deployments, or local development.
allowed-tools: Read, Write, Edit, Bash, Grep, Glob
```
# 1Password CLI Coder

## Overview

The 1Password CLI (op) provides secure secret injection into development workflows without exposing credentials in code, environment files, or shell history.

## Core Patterns

### Secret Reference Format
```
op://<vault>/<item>/<field>
```

Examples:

```
op://Development/AWS/access_key_id
op://Production/Database/password
op://Shared/Stripe/secret_key
```
### Item Naming Conventions

Use `{environment}-{service}` format for item names:

| Pattern | Example | Notes |
|---|---|---|
| `{env}-{service}` | `production-rails` | Primary app secrets |
| `{env}-{provider}` | `production-dockerhub` | External service credentials |
| `{env}-{provider}-{resource}` | `production-hetzner-s3` | Provider with multiple resources |
DO:

- Use kebab-case (no spaces, no underscores)
- Prefix with environment (`production-`, `staging-`, `development-`)
- Keep names descriptive but concise

DON'T:

- Use spaces in item names (`Production Rails` → `production-rails`)
- Use generic names (`API Key` → `production-stripe`)
- Mix environments in one item (create separate items per environment)
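The naming rules above are easy to enforce mechanically. Here is a minimal sketch of a helper that normalizes an arbitrary title into the kebab-case convention; the `slugify_item` function name is ours, not part of the op CLI:

```shell
# Hypothetical helper: normalize an item title to kebab-case
# (lowercase, spaces/underscores -> hyphens, other punctuation stripped).
slugify_item() {
  printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | tr ' _' '--' \
    | tr -cd 'a-z0-9-'
}

slugify_item "Production Rails"   # -> production-rails
```

A script creating items in bulk could pass every title through this helper so `Production Rails`-style names never reach the vault.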
### Field Naming

Use semantic field names that describe the credential type:

| Good | Bad | Why |
|---|---|---|
| `access_token` | `value` | Self-documenting |
| `master_key` | `secret` | Specific purpose clear |
| `secret_access_key` | `key` | Matches AWS naming |
| `api_token` | `token` | Distinguishes from other tokens |
Field naming rules:

- Match the provider's terminology when possible (AWS uses `access_key_id`, `secret_access_key`)
- Use snake_case for consistency
- Be specific: `database_password`, not just `password`, when an item holds multiple credentials
### Environment File (.op.env)

Create `.op.env` in the project root:

```shell
# AWS credentials
AWS_ACCESS_KEY_ID=op://Infrastructure/AWS/access_key_id
AWS_SECRET_ACCESS_KEY=op://Infrastructure/AWS/secret_access_key
AWS_REGION=op://Infrastructure/AWS/region

# DigitalOcean
DIGITALOCEAN_TOKEN=op://Infrastructure/DigitalOcean/api_token

# Database
DATABASE_URL=op://Production/PostgreSQL/connection_string

# API Keys
STRIPE_SECRET_KEY=op://Production/Stripe/secret_key
OPENAI_API_KEY=op://Development/OpenAI/api_key
```

Critical: Add to `.gitignore`:

```
# 1Password - NEVER commit
.op.env
*.op.env
```
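That `.gitignore` rule can be verified automatically. A quick guard (a sketch; the function name is ours) that a pre-commit hook or CI step could run before anything is committed:

```shell
# Fail fast if .op.env is not listed in .gitignore.
# Takes the gitignore path as an optional argument (defaults to .gitignore).
check_op_env_ignored() {
  gitignore="${1:-.gitignore}"
  if ! grep -qxF '.op.env' "$gitignore" 2>/dev/null; then
    echo "ERROR: add .op.env to $gitignore before committing" >&2
    return 1
  fi
}
```

Wiring this into a pre-commit hook turns "NEVER commit" from a convention into an enforced check.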
### Running Commands with Secrets

```shell
# Single command
op run --env-file=.op.env -- terraform plan

# With environment variable prefix
op run --env-file=.op.env -- rails server

# Inline secret reference
op run -- printenv DATABASE_URL
```
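For commands that always need the env file, a thin wrapper gives a clearer failure when the file is missing than op's own error deep inside a deploy. This is a sketch; the `run_with_secrets` name is ours:

```shell
# Hypothetical wrapper: verify the env file exists before handing
# off to `op run`, honoring an OP_ENV_FILE override.
run_with_secrets() {
  env_file="${OP_ENV_FILE:-.op.env}"
  if [ ! -f "$env_file" ]; then
    echo "ERROR: $env_file not found" >&2
    return 1
  fi
  op run --env-file="$env_file" -- "$@"
}
```

Usage: `run_with_secrets terraform plan`, or `OP_ENV_FILE=.op.env.staging run_with_secrets rails server`.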
## Integration Patterns

### Makefile Integration

```makefile
OP ?= op
OP_ENV_FILE ?= .op.env

# Prefix for all commands needing secrets
CMD = $(OP) run --env-file=$(OP_ENV_FILE) --

deploy:
	$(CMD) kamal deploy

console:
	$(CMD) rails console

migrate:
	$(CMD) rails db:migrate
```
### Docker Compose

```yaml
# docker-compose.yml
services:
  app:
    build: .
    environment:
      - DATABASE_URL=${DATABASE_URL}
      - REDIS_URL=${REDIS_URL}
```

```shell
# Run with secrets injected
op run --env-file=.op.env -- docker compose up
```
### Kamal Deployment

```yaml
# config/deploy.yml
env:
  secret:
    - RAILS_MASTER_KEY
    - DATABASE_URL
    - REDIS_URL
```

```shell
# .kamal/secrets (loaded by Kamal)
RAILS_MASTER_KEY=$(op read "op://Production/Rails/master_key")
DATABASE_URL=$(op read "op://Production/PostgreSQL/url")
REDIS_URL=$(op read "op://Production/Redis/url")
```
### CI/CD (GitHub Actions)

```yaml
# .github/workflows/deploy.yml
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: 1password/load-secrets-action@v2
        with:
          export-env: true
        env:
          OP_SERVICE_ACCOUNT_TOKEN: ${{ secrets.OP_SERVICE_ACCOUNT_TOKEN }}
          AWS_ACCESS_KEY_ID: op://CI/AWS/access_key_id
          AWS_SECRET_ACCESS_KEY: op://CI/AWS/secret_access_key
      - run: terraform apply -auto-approve
```
## CLI Commands

### Reading Secrets

```shell
# Read single field
op read "op://Vault/Item/field"

# Read with output format
op read "op://Vault/Item/field" --format json

# Read to file (for certificates, keys)
op read "op://Vault/TLS/private_key" > /tmp/key.pem
chmod 600 /tmp/key.pem
```
### Injecting into Commands

```shell
# Single secret inline
DATABASE_URL=$(op read "op://Production/DB/url") rails db:migrate

# Multiple secrets via env file
op run --env-file=.op.env -- ./deploy.sh

# With account specification
op run --account my-team --env-file=.op.env -- terraform apply
```
### Managing Items

```shell
# List vaults
op vault list

# List items in vault
op item list --vault Infrastructure

# Get item details
op item get "AWS" --vault Infrastructure

# Create item (fields are passed as field=value assignments)
op item create \
  --category login \
  --vault Infrastructure \
  --title "New Service" \
  username=admin \
  password=secret123
```
## Project Setup

### Initial Configuration

```shell
# Sign in (creates session)
op signin

# Verify access
op vault list

# Create project env file
cat > .op.env << 'EOF'
# Infrastructure secrets
AWS_ACCESS_KEY_ID=op://Infrastructure/AWS/access_key_id
AWS_SECRET_ACCESS_KEY=op://Infrastructure/AWS/secret_access_key

# Application secrets
DATABASE_URL=op://Production/Database/url
REDIS_URL=op://Production/Redis/url
EOF

# Test secret loading
op run --env-file=.op.env -- env | grep -E '^(AWS|DATABASE|REDIS)'
```
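Beyond testing that secrets load, it is worth linting the env file itself so a hardcoded value never slips in next to the references. A sketch (the `lint_op_env` name is ours):

```shell
# Print any non-comment line in the env file that does NOT map a
# variable to an op:// reference -- likely a hardcoded secret.
lint_op_env() {
  grep -vE '^[[:space:]]*(#|$)' "$1" \
    | grep -vE '^[A-Za-z_][A-Za-z0-9_]*=op://' \
    || true
}
```

An empty result means every entry is a reference; any output is a line to fix before committing tooling around the file.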
### Placeholder Workflow

Create items with placeholder values upfront, then populate them with real credentials later:

```shell
# 1. Create item with placeholder values
op item create \
  --vault myproject \
  --category login \
  --title "production-rails" \
  master_key="PLACEHOLDER_UPDATE_BEFORE_DEPLOY"

# 2. Create .kamal/secrets referencing the item
cat > .kamal/secrets << 'EOF'
RAILS_MASTER_KEY=$(op read "op://myproject/production-rails/master_key")
EOF

# 3. Update deployment docs to match
# docs/DEPLOYMENT.md should reference the same paths

# 4. Later: update with the real value
op item edit "production-rails" \
  --vault myproject \
  master_key="actual_secret_value_here"
```
Benefits:

- Infrastructure code can be written before credentials exist
- All secret paths are documented in code/docs from the start
- Reduces "forgot to update the docs" friction during deployment
- Team members can see which secrets are needed without having access to the values
Documentation Sync:

Keep `.kamal/secrets` (or equivalent) and deployment docs in sync:

```markdown
<!-- docs/DEPLOYMENT.md -->
## Required Secrets

| Secret | 1Password Path | Purpose |
|--------|----------------|---------|
| `RAILS_MASTER_KEY` | `op://myproject/production-rails/master_key` | Decrypt credentials |
| `DOCKERHUB_TOKEN` | `op://myproject/production-dockerhub/access_token` | Pull images |
```
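Keeping the two files in sync can also be checked mechanically: extract every `op://` path from each file and diff the sorted sets. A sketch, assuming the convention above of no spaces in vault/item/field names (the function names are ours):

```shell
# Pull every op://vault/item/field path out of a file, deduplicated.
extract_op_paths() {
  grep -oE 'op://[A-Za-z0-9._-]+/[A-Za-z0-9._-]+/[A-Za-z0-9._-]+' "$1" | sort -u
}

# Exit nonzero (with a diff) if the secrets file and the doc disagree.
check_secrets_docs_sync() {
  a=$(mktemp); b=$(mktemp)
  extract_op_paths "$1" > "$a"
  extract_op_paths "$2" > "$b"
  diff "$a" "$b"
}
```

Usage: `check_secrets_docs_sync .kamal/secrets docs/DEPLOYMENT.md` in CI catches the "forgot to update the docs" case automatically.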
## Vault Organization

### Single-Vault Approach (Simpler)

Use one vault with naming conventions for environment separation:

```
Vault: myproject
Items:
  - production-rails
  - production-dockerhub
  - production-hetzner-s3
  - staging-rails
  - staging-dockerhub
  - development-rails
```

Benefits:

- Simpler permission management (one vault to configure)
- Item names are self-documenting with the environment prefix
- Easier to see all project secrets at a glance
### Multi-Vault Approach (Team Scale)

Separate vaults when you need different access controls:

| Vault | Purpose | Access |
|---|---|---|
| `Infrastructure` | Cloud provider credentials | DevOps team |
| `Production` | Production app secrets | Deploy systems |
| `Staging` | Staging environment | Dev team |
| `Development` | Local dev secrets | Individual devs |
| `Shared` | Cross-team API keys | All teams |

When to use which:

- Single vault: solo developer, small team, single project
- Multi-vault: multiple teams, strict access control requirements, compliance needs
## Security Best Practices

### DO

- Use `.op.env` files for project-specific secret mapping
- Add all `.op.env` variants to `.gitignore`
- Use service accounts for CI/CD (not personal accounts)
- Scope vault access by team/environment
- Rotate secrets regularly via 1Password

### DON'T

- Never commit `.op.env` files
- Never use `op read` output in logs or echo statements
- Never store session tokens in scripts
- Avoid hardcoding vault/item names; use variables
### Audit Logging

```shell
# Check recent access events
op events-api

# Specific vault events
op audit-events list --vault Production
```
## Troubleshooting

### Session Expired

```shell
# Re-authenticate
op signin

# Check current session
op whoami
```

### Item Not Found

```shell
# Verify vault access
op vault list

# Search for item
op item list --vault Infrastructure | grep -i aws

# Check exact field names
op item get "AWS" --vault Infrastructure --format json | jq '.fields[].label'
```

### Permission Denied

```shell
# Check account permissions
op vault list

# Verify specific vault access
op vault get Infrastructure
```
### Multiple Accounts

To manage multiple 1Password accounts (personal + work), use the `--account` flag or the `OP_ACCOUNT` environment variable:

```shell
# Specify account per command
op vault list --account acme.1password.com

# Set default for shell session
export OP_ACCOUNT=acme.1password.com

# With op run
op run --account acme.1password.com --env-file=.op.env -- ./deploy.sh
```

Key rule: always specify the account in automation scripts; never rely on "last signed in".

See `resources/multiple-accounts.md` for detailed patterns, including cross-account workflows and Makefile integration.
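The "always specify the account" rule can be made a hard failure at the top of any automation script. A minimal sketch (the `require_op_account` name is ours):

```shell
# Abort early if no account is pinned, instead of silently using
# whichever account was last signed in.
require_op_account() {
  if [ -z "${OP_ACCOUNT:-}" ]; then
    echo "ERROR: set OP_ACCOUNT (e.g. acme.1password.com) before running" >&2
    return 1
  fi
}
```

Call `require_op_account` before the first `op` invocation in a deploy script; the explicit error beats reading secrets from the wrong account.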
Multi-Environment Pattern
# .op.env.production
DATABASE_URL=op://Production/Database/url
REDIS_URL=op://Production/Redis/url
# .op.env.staging
DATABASE_URL=op://Staging/Database/url
REDIS_URL=op://Staging/Redis/url
# .op.env.development
DATABASE_URL=op://Development/Database/url
REDIS_URL=op://Development/Redis/url
ENV ?= development
OP_ENV_FILE = .op.env.$(ENV)
deploy:
op run --env-file=$(OP_ENV_FILE) -- kamal deploy
# Usage: make deploy ENV=production
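One hazard with `ENV`-suffixed files is that a typo like `ENV=prod` silently selects a nonexistent file. A small validator (a sketch; the function name is ours) fails loudly instead:

```shell
# Map an environment name to its env file, rejecting unknown values.
select_op_env_file() {
  case "$1" in
    production|staging|development)
      echo ".op.env.$1"
      ;;
    *)
      echo "ERROR: unknown environment '$1'" >&2
      return 1
      ;;
  esac
}
```

A deploy script can then do `file=$(select_op_env_file "$ENV") && op run --env-file="$file" -- kamal deploy`, so only the three known environments ever reach `op run`.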