
The LUNARTECH Superpowers Skills Library is the official catalog of specialized capabilities built for Claude and other AI agents. It is not a prompt library. It is not a list of instructions pasted into a chat window. It is a production-grade collection of 246+ modular Skill packages — each one a self-contained directory containing a SKILL.md, executable scripts, reference documentation, and assets — that teach Claude exactly how to accomplish a specific category of work to professional standards.
The architecture behind every Skill is the same: a YAML frontmatter block that determines when the Skill triggers, a body of procedural knowledge Claude reads when the task is relevant, and optional bundled resources loaded on demand. The result is an AI that arrives at your task already knowing your standards — not one you have to re-brief from scratch every session.
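As a concrete sketch of that frontmatter, here is what a trigger block for the systematic-debugging skill might look like. The field names are illustrative assumptions, not a guarantee of the exact Skills schema; the description text is taken from the skill's own catalog entry.

```yaml
---
name: systematic-debugging
description: Use when encountering any bug, test failure, or unexpected behavior, before proposing fixes
---
```

The description is what Claude matches against the task at hand, so it reads as a trigger condition rather than a summary.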
Below is a complete breakdown of every skill category in the library as of March 2026, including what each one does and why it exists.
These 20 core skills are the foundation of the Superpowers library. They govern how Claude thinks, plans, executes, verifies, and improves across all other tasks. They are not domain-specific — they are the operating system on top of which every other Skill runs.
using-superpowers is the entry point. It establishes how Claude finds and uses skills, and requires a Skill invocation before any response, including clarifying questions. It is the skill that makes all other skills discoverable.
subagent-driven-development and dispatching-parallel-agents handle concurrent execution — the former coordinates independent tasks within a single session, the latter dispatches multiple browser or code agents simultaneously, so work that would take hours serially completes in minutes.
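The fan-out pattern behind parallel dispatch can be sketched with Python's standard thread pool. Here `dispatch` is a stand-in for handing one independent task to a subagent; all names are illustrative, not part of the skill itself.

```python
from concurrent.futures import ThreadPoolExecutor

def dispatch(task: str) -> str:
    # Stand-in for handing one independent task to a subagent.
    return f"done: {task}"

tasks = ["audit auth flow", "update docs", "fix flaky test"]

# Dispatch all independent tasks at once instead of serially;
# map() preserves input order in the results.
with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    results = list(pool.map(dispatch, tasks))

print(results)  # → ['done: audit auth flow', 'done: update docs', 'done: fix flaky test']
```

The point of the pattern is that total wall-clock time tracks the slowest task, not the sum of all of them.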
writing-plans and executing-plans separate the thinking from the doing. When a spec or set of requirements arrives, Claude writes a structured plan before touching code. When the plan is approved, a separate Skill governs disciplined execution step by step.
verification-before-completion is one of the most consequential skills in the library. It prevents Claude from declaring success without evidence. Before committing or creating a PR, Claude must run verification commands and confirm actual output — not infer it from the fact that no errors appeared.
systematic-debugging defines the diagnostic approach for any bug, test failure, or unexpected behavior. It requires hypothesis formation and evidence-gathering before proposing fixes — ruling out the single most common failure mode in AI-assisted development: guessing at a fix without understanding the cause.
test-driven-development enforces TDD discipline: tests are written before implementation code, not after. This Skill is triggered any time a feature or bugfix is being implemented.
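The red-green rhythm the skill enforces can be sketched with the standard library's unittest. The `slugify` example is hypothetical; what matters is the order — the test exists before the implementation it constrains.

```python
import unittest

# Red first: the test is written before any implementation exists
# and pins down the behavior we want.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Green second: the minimal implementation that makes the test pass.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

suite = unittest.TestLoader().loadTestsFromTestCase(TestSlugify)
outcome = unittest.TextTestRunner(verbosity=0).run(suite)
print(outcome.wasSuccessful())  # → True
```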
writing-skills and antigravity-skill-creator cover meta-capabilities: creating new Superpowers skills from scratch, editing existing ones, and verifying they work before deployment.
Additional core skills include brainstorming for structured ideation, brand-identity for identity design and guidelines, receiving-code-review and requesting-code-review for structured review workflows, documenting-work for clear post-task documentation, finishing-a-development-branch for clean merge preparation, project-bootstrapper for initializing new projects with proper structure, using-git-worktrees for isolated feature work, video-to-frames-workflow for scroll-animations and sprite extraction, and ai-website-deployment for shipping AI-powered web applications.
Seventeen skills cover the full spectrum of visual interface design — from component-level UI patterns to full design system implementation to production-ready animation.
phoenix-design-system encodes the Phoenix component library with its guidelines and usage patterns. whisk-product-visuals handles the Whisk design language specifically, ensuring visual consistency across product surfaces. dark-mode-design implements dark mode design systems correctly — not just inverting colors, but rebuilding the entire palette hierarchy for low-light environments.
Component-level skills include badge-pill-components, stats-metrics-display, section-headers, bento-grid-layouts, feature-grid-sections, team-about-sections, and interactive-icon-demos — each encoding the correct implementation patterns for its specific UI element type.
Animation skills cover two distinct paradigms: scroll-based-3d-animations for parallax and scroll-triggered 3D effects, and micro-interactions for the subtle hover states, transitions, and feedback animations that define polished user experiences. glassmorphism-design implements the frosted-glass aesthetic with correct backdrop filters, opacity, and border treatments.
timeline-process-flows handles the design of step-based and chronological visualizations. figma-ai-website-design brings AI-assisted design directly into Figma workflows. ai-website-deployment and video-to-frames-workflow round out the design skill set with production deployment and frame-extraction capabilities.
Eleven skills encode expert-level knowledge for the workflows every software team encounters, regardless of stack or domain.
git-advanced-workflows covers the operations that separate senior engineers from junior ones: rebasing, cherry-picking, interactive history editing, and complex branching strategies that keep repositories navigable as teams scale.
e2e-testing-patterns brings end-to-end testing discipline with framework-agnostic patterns that apply across Playwright, Cypress, and similar tools. auth-implementation-patterns encodes correct authentication and authorization across common approaches — OAuth, JWT, session-based, API key — with the security considerations that apply to each.
Build tooling is covered by three dedicated skills: bazel-build-optimization for Bazel caching and configuration, turborepo-caching for Turborepo’s remote cache setup and invalidation logic, and nx-workspace-patterns for Nx configuration and task graph management. monorepo-management takes a higher-level view — structuring, tooling, and governance for monorepos as they grow.
code-review-excellence defines what a thorough, constructive code review actually looks like — what to look for, how to phrase feedback, what to approve versus block. debugging-strategies and error-handling-patterns cover systematic diagnosis and robust error architecture respectively. sql-optimization-patterns closes the section with query optimization, index design, and database performance tuning.
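The core move in query optimization — read the plan before and after adding an index — can be shown with the standard library's sqlite3 (used here for portability; the skill itself is engine-agnostic, and the table and index names are illustrative).

```python
import sqlite3

# In-memory table standing in for a production one.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
con.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                [(i % 100, i * 1.5) for i in range(1000)])

query = "SELECT total FROM orders WHERE customer_id = ?"

# Without an index the planner must scan the whole table...
before = con.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()

# ...and with one it can seek directly to the matching rows.
con.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = con.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()

print(before[0][-1])  # e.g. 'SCAN orders'
print(after[0][-1])   # e.g. 'SEARCH orders USING INDEX idx_orders_customer (customer_id=?)'
```

The exact plan wording varies by SQLite version, but the SCAN-to-SEARCH shift is the signal the skill teaches Claude to look for.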
Five skills target the Python-specific workflows that trip up even experienced developers.
python-packaging covers the full distribution pipeline — building wheels, configuring pyproject.toml, publishing to PyPI and private registries, and managing package metadata correctly. python-testing-patterns encodes pytest best practices: fixture design, parametrize patterns, mocking strategies, and coverage configuration that produces meaningful rather than inflated numbers.
python-performance-optimization covers profiling with cProfile and line_profiler, memory analysis with tracemalloc, and the optimization patterns that actually move the needle — versus the premature optimizations that add complexity without measurable gain. async-python-patterns handles the asyncio ecosystem: event loop management, task coordination, cancellation, and the concurrency patterns that scale cleanly.
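The basic concurrency win asyncio offers can be sketched in a few lines; `fetch` here is a hypothetical stand-in for any I/O-bound call.

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    # Stand-in for an I/O-bound call (HTTP request, DB query, ...).
    await asyncio.sleep(delay)
    return name

async def main() -> list[str]:
    # gather() runs the coroutines concurrently; total time is the
    # maximum delay, not the sum of the delays.
    return await asyncio.gather(fetch("a", 0.01), fetch("b", 0.02))

results = asyncio.run(main())
print(results)  # → ['a', 'b']
```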
uv-package-manager encodes the uv workflow — the Rust-based package manager that resolves dependencies and creates virtual environments at speeds that make pip feel slow. For teams that have not yet switched, this Skill accelerates the transition.
Eight skills cover advanced backend architecture patterns — the kind typically found in distributed systems and event-driven platforms.
cqrs-implementation encodes Command Query Responsibility Segregation correctly — separate read and write models, command handlers, query handlers, and the infrastructure plumbing that connects them without coupling the models together. event-store-design handles the persistence layer underneath: append-only event logs, snapshot strategies, and the projection patterns that rebuild state from events.
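The append-only log and projection replay can be sketched in plain Python. This is a deliberately minimal illustration of the principle — events are facts, state is a fold over them — not the skill's actual infrastructure; all class and event names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Event:
    kind: str
    amount: int

@dataclass
class EventStore:
    # Append-only log: events are never updated or deleted.
    _log: list[Event] = field(default_factory=list)

    def append(self, event: Event) -> None:
        self._log.append(event)

    def replay(self) -> list[Event]:
        return list(self._log)

def balance_projection(events: list[Event]) -> int:
    # Read-side state is rebuilt purely by folding over the event log.
    balance = 0
    for e in events:
        balance += e.amount if e.kind == "deposited" else -e.amount
    return balance

store = EventStore()
store.append(Event("deposited", 100))
store.append(Event("withdrawn", 30))
print(balance_projection(store.replay()))  # → 70
```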
projection-patterns goes deeper on the read side — how projections are built, how they handle replays, and how they stay consistent as event schemas evolve. saga-orchestration addresses distributed transactions using the Saga pattern: compensating transactions, orchestrator design, and the failure modes that make distributed consistency hard.
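The compensating-transaction idea at the heart of the Saga pattern fits in a short sketch: each step carries its own undo, and a failure triggers the undos in reverse. The step names are illustrative, not drawn from the skill.

```python
def run_saga(steps):
    # Each step pairs an action with its compensation. When a later step
    # fails, compensations for completed steps run in reverse order.
    completed = []
    for action, compensate in steps:
        try:
            action()
        except Exception:
            for comp in reversed(completed):
                comp()
            return "rolled back"
        completed.append(compensate)
    return "committed"

log = []

def fail_shipping():
    raise RuntimeError("carrier unavailable")

steps = [
    (lambda: log.append("reserve stock"), lambda: log.append("release stock")),
    (lambda: log.append("charge card"), lambda: log.append("refund card")),
    (fail_shipping, lambda: log.append("cancel shipment")),
]

print(run_saga(steps))  # → rolled back
print(log)  # → ['reserve stock', 'charge card', 'refund card', 'release stock']
```

A real orchestrator also has to handle compensations that themselves fail, which is exactly the failure-mode territory the skill covers.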
temporal-python-testing covers testing specifically for Temporal workflows and activities in Python — a non-trivial problem given Temporal’s execution model. workflow-orchestration-patterns takes a broader view across orchestration systems. microservices-patterns encodes service decomposition, inter-service communication, and the operational patterns (circuit breakers, bulkheads, retries) that keep microservice architectures stable. architecture-patterns covers DDD, hexagonal architecture, and clean architecture at the design level.
Four skills cover the tooling stack that moves, transforms, and validates data at scale.
airflow-dag-patterns encodes correct Apache Airflow DAG design — operator selection, XCom usage, dynamic DAG generation, and the common antipatterns (top-level code with side effects, large task granularity) that cause Airflow installations to become unmaintainable. dbt-transformation-patterns covers dbt model design, testing strategies, incremental materialization, and documentation-as-code.
spark-optimization addresses the performance layer of Apache Spark: partition sizing, shuffle minimization, broadcast joins, and the memory configuration that prevents OOM failures on large jobs. data-quality-frameworks encodes systematic data validation — expectation suites, anomaly detection, and monitoring pipelines that catch data quality issues before they propagate downstream.
The postgresql Skill is a comprehensive guide to PostgreSQL beyond basic SQL: schema design with proper normalization, indexing strategies (B-tree, GIN, BRIN, partial indexes), query planning and EXPLAIN ANALYZE interpretation, partitioning, replication configuration, and the advanced features (CTEs, window functions, JSONB, full-text search) that differentiate PostgreSQL from a generic relational database.
Four production-grade skills handle the most universal file formats in professional environments.
docx creates and manipulates Microsoft Word documents programmatically using the correct XML-level approach — not text with fake formatting, but fully spec-compliant .docx files that open correctly in Word, Google Docs, and LibreOffice. It handles heading hierarchies, native numbering, tables, embedded images, tracked changes, headers and footers, footnotes, and hyperlinks.
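What "the correct XML-level approach" means is easy to see once you know a .docx file is a ZIP archive of XML parts. The sketch below writes the three parts a minimal spec-compliant package needs; real documents (and the skill's tooling) add styles, numbering, and further relationship parts on top.

```python
import io
import zipfile

# Declares the content type of each part in the package.
CONTENT_TYPES = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Types xmlns="http://schemas.openxmlformats.org/package/2006/content-types">
  <Default Extension="rels" ContentType="application/vnd.openxmlformats-package.relationships+xml"/>
  <Override PartName="/word/document.xml" ContentType="application/vnd.openxmlformats-officedocument.wordprocessingml.document.main+xml"/>
</Types>"""

# Points the package root at the main document part.
RELS = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">
  <Relationship Id="rId1" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/officeDocument" Target="word/document.xml"/>
</Relationships>"""

# One paragraph, one run, one text node.
DOCUMENT = ('<?xml version="1.0" encoding="UTF-8" standalone="yes"?>'
            '<w:document xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main">'
            '<w:body><w:p><w:r><w:t>Hello from the XML level</w:t></w:r></w:p></w:body></w:document>')

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
    z.writestr("[Content_Types].xml", CONTENT_TYPES)
    z.writestr("_rels/.rels", RELS)
    z.writestr("word/document.xml", DOCUMENT)

names = zipfile.ZipFile(buf).namelist()
print(names)
```

Everything the skill handles — numbering, tracked changes, footnotes — ultimately lives as additional XML inside this same package structure.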
pdf covers the full PDF operation set: text and table extraction with pdfplumber, OCR on scanned documents with pytesseract, merging, splitting, rotating, watermarking, form filling with field coordinate identification, encryption, and creation with reportlab. Edge cases — non-fillable forms, large file merges, image-heavy documents — are handled explicitly.
xlsx handles Excel programmatically with openpyxl for formula-preserving output and pandas for bulk data operations. It enforces the core principle that calculated values are always formulas, never hardcoded numbers. Financial model color conventions, number formatting standards, and automated formula error scanning via LibreOffice recalculation are all built in.
pptx manages the full PowerPoint lifecycle — reading with markitdown and visual thumbnail inspection, editing via XML manipulation of the unpacked archive, and creating new presentations with pptxgenjs. The Skill ships detailed design guidance to prevent the visual antipatterns that make AI-generated slides identifiable as AI-generated: accent lines under titles, text-only slides, default color palettes, uneven element spacing.
Four skills support research, connectivity, and content creation workflows.
lead-research-assistant provides structured deep research methodology for complex topics — source evaluation, synthesis, and structured output that goes beyond surface-level summaries. connect-apps handles third-party application connectivity and integration workflows. image-enhancer applies AI-powered upscaling and enhancement to images. domain-name-brainstormer provides structured methodology for evaluating domain names — availability, brandability, SEO considerations, and competitive landscape.
The largest category of core skills spans scientific computing libraries, bioinformatics platforms, research databases, and academic writing workflows. This section covers 80+ specialized skills.
The visualization stack includes matplotlib for publication-quality static plots with correct axis formatting and figure sizing for journal submission, plotly for interactive dashboards and Dash applications, and seaborn for statistical graphics with correct application of its categorical and distribution plot types.
For data manipulation at scale: polars for high-performance DataFrame operations using lazy evaluation and the query optimizer, and vaex for out-of-core analysis of datasets with billions of rows that exceed available RAM — using memory-mapped files rather than loading everything into memory.
The mathematical and statistical layer includes sympy for symbolic algebra, calculus, and matrix operations; statsmodels for OLS, GLM, mixed-effects models, and ARIMA with full diagnostic output; and scikit-learn for the complete machine learning workflow — preprocessing, model selection, cross-validation, pipelines, and evaluation.
Deep learning is covered by pytorch-lightning for structured training loops with correct callback design, transformers for the Hugging Face ecosystem across NLP, vision, audio, and multimodal tasks, and torch_geometric (PyG) for Graph Neural Networks applied to node classification, link prediction, and molecular property prediction. torchdrug specializes in drug discovery and protein modeling with PyTorch-native GNNs.
Reinforcement learning is handled by stable-baselines3 for standard RL algorithms and pufferlib for scalable multi-environment training. Model interpretability is served by shap for SHAP value computation and visualization. Dimensionality reduction uses umap-learn for 2D/3D embedding of high-dimensional data prior to visualization or clustering.
Graph and network analysis uses networkx for construction, analysis, and visualization of complex networks. Simulation is handled by simpy for discrete event simulation. Cloud-scale array storage uses zarr-python for chunked N-dimensional arrays with S3 and GCS integration and parallel I/O.
Quantum computing is covered by three skills: pennylane for quantum machine learning and variational circuits, qiskit for IBM-native quantum circuit design and algorithm implementation, and qutip for open quantum systems simulation and density matrix evolution.
Probabilistic programming uses pymc for Bayesian model specification and MCMC inference. Multi-objective optimization uses pymoo. Materials science uses pymatgen for crystal structure analysis and property computation. Computational chemistry uses rowan. Numerical computing uses matlab. GPU-scale ML jobs are deployed with modal.
Single-cell genomics is covered by scanpy for the standard scRNA-seq analysis pipeline (QC, normalization, clustering, trajectory inference), scvi-tools for deep probabilistic models (scVI, scANVI, totalVI) applied to multi-modal single-cell data, and cellxgene-census for querying the CZ CELLxGENE Census — the largest standardized single-cell dataset collection available. pydeseq2 handles differential expression analysis with the DESeq2 statistical framework implemented in Python.
Genomic data processing uses pysam for reading and manipulating SAM/BAM alignment files directly. Mass spectrometry data is handled by pyopenms for the OpenMS framework. Metabolomics datasets are accessed via metabolomics-workbench-database.
Computational pathology and medical imaging are covered by pathml for whole-slide image analysis and pydicom for reading, writing, and processing DICOM files. Clinical data processing and healthcare ML use pyhealth.
Physiological signal processing is handled by neurokit2 for ECG, EEG, and EDA data. Electrophysiology recordings from Neuropixels probes are analyzed with the neuropixels-analysis Skill. Microscopy image management integrates with OMERO via omero-integration.
Drug discovery chemistry uses rdkit for cheminformatics and molecular analysis, molfeat for molecular featurization for ML pipelines, and medchem for medicinal chemistry workflows and drug-likeness evaluation (Lipinski, ADMET properties). Benchmark datasets for therapeutics ML come from pytdc via the Therapeutics Data Commons.
Laboratory automation is handled by two skills: pylabrobot for hardware-agnostic liquid handling robot programming and opentrons-integration specifically for Opentrons OT-2 and Flex robots. Cloud bioinformatics workflows run on latchbio-integration via the Latch Bio platform.
Fifteen database skills provide direct, correctly-formatted access to the most important scientific data sources.
pubmed-database queries NCBI PubMed via the Entrez API for biomedical literature search and retrieval. openalex-database accesses the OpenAlex scholarly graph — works, authors, institutions, concepts, and citation networks — via its REST API. research-lookup and perplexity-search provide AI-powered research summarization and web-grounded scientific search.
Structural biology data comes from pdb-database for 3D macromolecular structures from the RCSB Protein Data Bank. Protein sequence and annotation data comes from uniprot-database via the UniProt REST API. Protein-protein interactions come from string-database, which covers 59 million proteins and 20 billion interactions.
Pathway biology uses reactome-database for curated biological pathway data and enrichment analysis. Target-disease associations for drug discovery come from opentargets-database. Chemical compound data is accessed via pubchem-database for PubChem and chembl-database for ChEMBL bioactivity data. Virtual screening compound libraries use zinc-database, which covers 230 million purchasable compounds. Drug and drug target data comes from drugbank-database. Patent and IP searches use uspto-database via USPTO APIs. Scientific protocol sharing uses protocolsio-integration for accessing and publishing on protocols.io.
Eighteen skills cover the research output pipeline from hypothesis to publication.
scientific-writing encodes the standards of clear, rigorous scientific manuscript writing — hypothesis framing, results reporting, statistical language, and the structural conventions of different journal formats. scientific-visualization produces publication-quality figures with correct axis labels, font sizes, color palettes appropriate for colorblind readers, and resolution specifications for journal submission.
scientific-schematics designs clear mechanistic diagrams and experimental schematics. scientific-slides creates professional conference and seminar presentations. scientific-brainstorming provides structured hypothesis generation and ideation frameworks. scientific-critical-thinking applies formal critical analysis to scientific problems, claims, and study designs.
statistical-analysis guides the full statistical workflow: choosing the correct test, checking assumptions, computing power, running the analysis, and reporting results in APA format. peer-review produces thorough, structured peer reviews covering methodology, results interpretation, and presentation.
scholar-evaluation assesses scholarly work and academic profiles. research-grants structures and writes grant proposals for major funding bodies. venue-templates provides ready-to-use LaTeX templates formatted to the exact specifications of Nature, Science, IEEE, ACM, NeurIPS, ICML, CVPR, CHI, and more.
pptx-posters creates scientific conference posters in PowerPoint format with correct academic poster layout conventions. paper-2-web converts scientific papers into interactive web pages. treatment-plans generates structured medical treatment plans in LaTeX/PDF across clinical specialties. proposal-cluster-learning writes cluster learning research proposals. textual-metadata-dataset-construction implements the IEEE ICME 2016 methodology for building large-scale image datasets from web sources using textual metadata. offer-k-dense-web implements dense web retrieval for scientific dataset construction.
The final and largest section of the Superpowers library: 832 integration skills for automating third-party applications via the Composio/Rube MCP platform. Every skill follows the same execution model — read SKILL.md for the tool name, then search for current tool schemas before automating tasks — ensuring that automations stay current as APIs evolve.
The CRM and sales category covers Salesforce, HubSpot, Pipedrive, Zoho CRM, and Copper. Communication platforms include Slack, Gmail, Twilio, SendGrid, and Mailchimp. Project management covers Asana, Trello, Jira, Linear, Basecamp, and Notion. Finance and payments use Stripe, Braintree, QuickBooks, Brex, and Xero.
Developer tooling covers GitHub, GitLab, Bitbucket, Buildkite, and Browserbase. Analytics integrations include Google Analytics, Amplitude, Mixpanel, and Segment. File storage and cloud drives use Google Drive, Dropbox, Box, and BunnyCDN. HR and recruiting platforms include Breezy HR, BambooHR, and Gusto.
E-commerce automations cover Booqable, BoxHero, and Bubble. Browser automation and scraping use Browserless, Browser Tool, and BrowseAI. Marketing automation covers Mailchimp, Campaign Cleaner, Brandfetch, and Brightdata.
The 832 integrations span virtually every SaaS category: help desks, CMS platforms, video conferencing, document signing, event management, customer support, logistics, IoT, data warehouses, BI tools, ad platforms, social media networks, email infrastructure, monitoring systems, and more. For any third-party tool a team uses, there is almost certainly a Composio skill for automating it.
Together, 246 core skills plus 832 Composio integrations represent something qualitatively different from a prompt template library. Each skill is the encoded output of expertise — the kind of knowledge that took practitioners years to accumulate about what actually works, what the edge cases are, and what separates professional output from amateur output.
When Claude arrives at a task with the right Skill loaded, it does not need to be told how to format a scientific figure for Nature, how to test a Temporal workflow in Python, or how to write a compensating transaction in a Saga. That knowledge is already there. It travels from session to session, from user to user, from task to task.
The library continues to expand as new skills are developed and validated. What is listed here reflects the state of the Superpowers repository as of March 8, 2026.
| Skill | Description |
|---|---|
using-superpowers | Use when starting any conversation — establishes how to find and use skills, requiring Skill tool invocation before ANY response including clarifying questions |
subagent-driven-development | Use when executing implementation plans with independent tasks in the current session |
dispatching-parallel-agents | Dispatch multiple parallel browser/code agents to accomplish tasks concurrently |
writing-plans | Use when you have a spec or requirements for a multi-step task, before touching code |
executing-plans | Use when executing a written plan with multiple steps |
verification-before-completion | Use when about to claim work is complete, fixed, or passing, before committing or creating PRs — requires running verification commands and confirming output before making any success claims |
systematic-debugging | Use when encountering any bug, test failure, or unexpected behavior, before proposing fixes |
test-driven-development | Use when implementing any feature or bugfix, before writing implementation code |
writing-skills | Use when creating new skills, editing existing skills, or verifying skills work before deployment |
antigravity-skill-creator | Use when creating new Antigravity skills from scratch |
brainstorming | Structured brainstorming methodology for ideation and problem-solving sessions |
brand-identity | Brand identity design and guidelines creation |
receiving-code-review | Best practices for receiving and responding to code review feedback |
requesting-code-review | Best practices for requesting code reviews |
documenting-work | Document completed work in a structured and clear format |
finishing-a-development-branch | Use when completing work on a development branch and preparing to merge |
project-bootstrapper | Bootstrap new software projects with proper structure and tooling |
using-git-worktrees | Use when starting feature work that needs isolation from current workspace or before executing implementation plans — creates isolated git worktrees with smart directory selection and safety verification |
video-to-frames-workflow | Extract and process video frames for scroll-animations, sprites, and AI training |
ai-website-deployment | Deploy AI-powered websites and web applications |
| Skill | Description |
|---|---|
phoenix-design-system | Phoenix design system components and guidelines |
badge-pill-components | Design and implement badge and pill UI components |
stats-metrics-display | Design stats and metrics display components |
whisk-product-visuals | Product visual design for the Whisk design language |
team-about-sections | Design team and about page sections |
dark-mode-design | Implement dark mode design systems and components |
scroll-based-3d-animations | Create scroll-triggered 3D animation effects |
glassmorphism-design | Implement glassmorphism design aesthetics |
micro-interactions | Design and implement micro-interaction animations for enhanced UX |
section-headers | Design consistent section header components |
timeline-process-flows | Design timeline and process flow visualizations |
bento-grid-layouts | Design bento-style grid layout components |
feature-grid-sections | Design feature grid section components |
interactive-icon-demos | Create interactive icon demo components |
figma-ai-website-design | AI-assisted website design workflows in Figma |
ai-website-deployment | Deploy AI-designed websites to production |
video-to-frames-workflow | Extract and process video frames for design assets |
| Skill | Description |
|---|---|
git-advanced-workflows | Advanced Git workflows including rebasing, cherry-picking, and complex branching strategies |
e2e-testing-patterns | End-to-end testing patterns and best practices |
bazel-build-optimization | Optimize Bazel build configurations and caching |
monorepo-management | Manage monorepo structures, tooling, and workflows |
nx-workspace-patterns | Nx workspace configuration and best practices |
turborepo-caching | Configure and optimize Turborepo caching |
auth-implementation-patterns | Authentication and authorization implementation patterns |
code-review-excellence | Perform thorough, constructive code reviews |
debugging-strategies | Systematic debugging strategies and techniques |
error-handling-patterns | Implement robust error handling across languages and frameworks |
sql-optimization-patterns | Optimize SQL queries and database performance |
| Skill | Description |
|---|---|
python-packaging | Package Python projects for distribution (PyPI, private registries) |
python-testing-patterns | Python testing patterns with pytest, fixtures, mocking, and coverage |
python-performance-optimization | Profile and optimize Python code for speed and memory efficiency |
async-python-patterns | Async/await patterns and concurrency in Python |
uv-package-manager | Use uv for fast Python package management and virtual environments |
| Skill | Description |
|---|---|
cqrs-implementation | Implement Command Query Responsibility Segregation (CQRS) patterns |
temporal-python-testing | Test Temporal workflows and activities in Python |
projection-patterns | Implement event sourcing projection patterns |
saga-orchestration | Design and implement Saga orchestration for distributed transactions |
workflow-orchestration-patterns | Workflow orchestration design patterns |
microservices-patterns | Microservices architecture patterns and best practices |
event-store-design | Design and implement event stores for event sourcing |
architecture-patterns | Software architecture patterns (DDD, hexagonal, clean architecture) |
| Skill | Description |
|---|---|
data-quality-frameworks | Implement data quality checks, validation, and monitoring frameworks |
airflow-dag-patterns | Design and implement Apache Airflow DAGs |
dbt-transformation-patterns | dbt data transformation patterns and best practices |
spark-optimization | Optimize Apache Spark jobs for performance and efficiency |
| Skill | Description |
|---|---|
postgresql | PostgreSQL schema design, indexing, performance tuning, and advanced features |
| Skill | Description |
|---|---|
pdf | Create, manipulate, and extract content from PDF files |
docx | Create and manipulate Microsoft Word documents programmatically |
xlsx | Create and manipulate Excel spreadsheets programmatically |
pptx | Create and manipulate PowerPoint presentations programmatically |
| Skill | Description |
|---|---|
lead-research-assistant | In-depth research assistance for complex topics |
connect-apps | Connect and integrate third-party applications |
image-enhancer | Enhance and upscale images using AI tools |
domain-name-brainstormer | Brainstorm and evaluate domain names for products and services |
| Skill | Description |
|---|---|
| matplotlib | Create publication-quality plots and visualizations with Matplotlib |
| plotly | Interactive visualizations with Plotly and Dash |
| seaborn | Statistical data visualization with Seaborn |
| polars | High-performance DataFrame operations with Polars |
| vaex | Process and analyze large tabular datasets (billions of rows) that exceed available RAM |
| sympy | Symbolic mathematics in Python: algebraic solving, calculus, matrix operations |
| statsmodels | Statistical models (OLS, GLM, mixed models, ARIMA) with detailed diagnostics |
| scikit-learn | Machine learning with scikit-learn: classification, regression, clustering, pipelines |
| scikit-bio | Bioinformatics analysis with scikit-bio |
| scikit-survival | Survival analysis with scikit-survival |
| shap | Model interpretability and explainability with SHAP values |
| umap-learn | UMAP dimensionality reduction for 2D/3D visualization and clustering preprocessing |
| networkx | Graph/network analysis and visualization with NetworkX |
| simpy | Discrete event simulation with SimPy |
| zarr-python | Chunked N-D arrays for cloud storage with parallel I/O, S3/GCS integration |
| transformers | Pre-trained transformer models for NLP, vision, audio, and multimodal tasks |
| pytorch-lightning | Structured deep learning with PyTorch Lightning |
| torch_geometric | Graph Neural Networks (PyG) for node/graph classification, link prediction, molecular property prediction |
| torchdrug | PyTorch-native GNNs for molecules and proteins: drug discovery, protein modeling, retrosynthesis |
| stable-baselines3 | Reinforcement learning with Stable Baselines3 |
| pufferlib | Scalable reinforcement learning with PufferLib |
| pennylane | Quantum machine learning and quantum computing with PennyLane |
| qiskit | Quantum computing circuits and algorithms with Qiskit |
| qutip | Quantum mechanics simulation with QuTiP |
| pymc | Probabilistic programming and Bayesian modeling with PyMC |
| pymoo | Multi-objective optimization with pymoo |
| pymatgen | Materials science analysis with pymatgen |
| matlab | MATLAB programming for numerical computing and simulation |
| modal | Run GPU workloads and ML jobs at scale with Modal |
| rowan | Computational chemistry with Rowan |
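To make the scientific-computing entries concrete, here is a minimal sketch of the kind of task the sympy row describes (algebraic solving and calculus). This is plain SymPy usage, not the Skill's internal procedure:

```python
import sympy as sp

# Solve x^2 - 5x + 6 = 0 symbolically.
x = sp.symbols("x")
roots = sp.solve(sp.Eq(x**2 - 5 * x + 6, 0), x)  # [2, 3]

# Differentiate, then integrate back to recover the original function.
f = sp.sin(x) * sp.exp(x)
df = sp.diff(f, x)       # exp(x)*sin(x) + exp(x)*cos(x)
F = sp.integrate(df, x)  # equal to f up to a constant
```

The Skill layers procedure on top of calls like these: when to prefer a symbolic solve over a numeric one, and how to simplify and present the result.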
| Skill | Description |
|---|---|
| scanpy | Single-cell RNA-seq analysis with Scanpy |
| scvi-tools | Deep probabilistic models for single-cell genomics |
| cellxgene-census | Query CZ CELLxGENE Census single-cell datasets |
| pydeseq2 | Differential expression analysis with PyDESeq2 |
| pysam | Read and manipulate SAM/BAM genomic alignment files |
| pathml | Computational pathology and whole-slide image analysis |
| pydicom | Read, write, and process DICOM medical imaging files |
| pyhealth | Clinical data processing and healthcare ML with PyHealth |
| neurokit2 | Physiological signal processing (ECG, EEG, EDA) with NeuroKit2 |
| neuropixels-analysis | Analyze Neuropixels electrophysiology recordings |
| omero-integration | Integrate with the OMERO image management platform |
| molfeat | Molecular featurization for machine learning |
| rdkit | Cheminformatics and molecular analysis with RDKit |
| pyopenms | Mass spectrometry data analysis with PyOpenMS |
| medchem | Medicinal chemistry workflows and drug-likeness evaluation |
| pytdc | Therapeutics Data Commons benchmark datasets |
| pylabrobot | Laboratory robotics automation with PyLabRobot |
| opentrons-integration | Automate liquid handling with Opentrons robots |
| latchbio-integration | Run bioinformatics workflows on Latch Bio |
| metabolomics-workbench-database | Access the Metabolomics Workbench database for metabolomics data |
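The bio skills above automate sequence and record handling at scale. As a pure-Python illustration of the underlying kind of task (this is not the pysam or scikit-bio API, just a toy FASTA parser and GC-content calculation):

```python
def parse_fasta(text: str) -> dict[str, str]:
    """Map each '>' header to its concatenated sequence."""
    records: dict[str, str] = {}
    name = None
    for line in text.strip().splitlines():
        if line.startswith(">"):
            name = line[1:].split()[0]  # header ID before any whitespace
            records[name] = ""
        elif name is not None:
            records[name] += line.strip()
    return records

def gc_content(seq: str) -> float:
    """Fraction of G/C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

fasta = ">seq1 demo\nATGCGC\nGATT\n>seq2\nGGCC\n"
records = parse_fasta(fasta)
```

The dedicated skills exist precisely because real workflows (indexed BAM files, multi-gigabyte references, quality scores) outgrow hand-rolled code like this almost immediately.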
| Skill | Description |
|---|---|
| pubmed-database | Search and retrieve biomedical literature from PubMed |
| openalex-database | Query OpenAlex for scholarly works, authors, and institutions |
| pdb-database | Access the Protein Data Bank for 3D macromolecular structures |
| uniprot-database | Direct REST API access to UniProt for protein sequences and annotations |
| string-database | Query the STRING API for protein-protein interactions (59M proteins, 20B interactions) |
| reactome-database | Pathway analysis using the Reactome database |
| opentargets-database | Target-disease association queries via Open Targets |
| pubchem-database | Chemical compound data from PubChem |
| zinc-database | Access ZINC (230M+ purchasable compounds) for virtual screening and drug discovery |
| chembl-database | Bioactivity data for drug discovery from ChEMBL |
| drugbank-database | Drug and drug target database access |
| uspto-database | Access USPTO APIs for patent/trademark searches and IP analysis |
| protocolsio-integration | Access and publish scientific protocols on protocols.io |
| perplexity-search | AI-powered research search using Perplexity |
| research-lookup | Look up and summarize published research papers |
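Most of the database skills above wrap public REST endpoints. A hedged sketch of how a UniProt query URL might be assembled; the endpoint path and parameter names follow UniProt's public REST API as documented at the time of writing, but treat them as assumptions and check the current docs before relying on them:

```python
from urllib.parse import urlencode

UNIPROT_SEARCH = "https://rest.uniprot.org/uniprotkb/search"

def build_uniprot_query(gene: str, organism_id: int, fmt: str = "json") -> str:
    """Assemble a UniProtKB search URL for a gene in a given organism."""
    params = {
        "query": f"gene:{gene} AND organism_id:{organism_id}",
        "format": fmt,
        "size": 5,  # cap the result count for a quick lookup
    }
    return f"{UNIPROT_SEARCH}?{urlencode(params)}"

# Human BRCA1 (NCBI taxonomy ID 9606). The actual fetch, e.g. with
# requests.get(url), is omitted so the sketch stays offline.
url = build_uniprot_query("BRCA1", 9606)
```

The corresponding skill adds what a URL builder cannot: pagination, rate-limit handling, and knowing which fields to request for a given question.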
| Skill | Description |
|---|---|
| scientific-writing | Write clear, rigorous scientific manuscripts and reports |
| scientific-visualization | Create publication-quality scientific figures and visualizations |
| scientific-schematics | Design clear scientific schematics and diagrams |
| scientific-slides | Create professional scientific presentation slides |
| scientific-brainstorming | Structured scientific ideation and hypothesis generation |
| scientific-critical-thinking | Apply critical thinking frameworks to scientific problems |
| statistical-analysis | Guided statistical analysis with test selection, assumption checking, power analysis, and APA reporting |
| peer-review | Write thorough peer reviews for scientific manuscripts |
| scholar-evaluation | Evaluate scholarly work and academic profiles |
| research-grants | Write and structure research grant proposals |
| venue-templates | LaTeX templates and formatting for Nature, Science, IEEE, ACM, NeurIPS, ICML, CVPR, CHI, and more |
| pptx-posters | Create scientific conference posters in PowerPoint format |
| paper-2-web | Convert scientific papers to interactive web pages |
| treatment-plans | Generate concise medical treatment plans in LaTeX/PDF for all clinical specialties |
| proposal-cluster-learning | Write cluster learning research proposals |
| textual-metadata-dataset-construction | Construct large-scale image datasets from web sources using textual metadata (IEEE ICME 2016 methodology) |
| offer-k-dense-web | Dense web retrieval methodology for scientific datasets |
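The statistical-analysis row describes a workflow, not just a library binding: check assumptions first, then select the test. A minimal sketch of that discipline with SciPy, using fixed toy data so it stays deterministic (real use would add power analysis and APA-style reporting):

```python
from scipy import stats

group_a = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8, 4.1, 4.0]
group_b = [5.0, 5.2, 4.9, 5.1, 5.3, 4.8, 5.0, 5.1]

# Assumption check: Shapiro-Wilk normality test on each group.
normal_a = stats.shapiro(group_a).pvalue > 0.05
normal_b = stats.shapiro(group_b).pvalue > 0.05

# Test selection: independent t-test if both groups look normal,
# otherwise fall back to the non-parametric Mann-Whitney U.
if normal_a and normal_b:
    result = stats.ttest_ind(group_a, group_b)
else:
    result = stats.mannwhitneyu(group_a, group_b)
```

Encoding the decision procedure, rather than defaulting to a t-test, is the difference between a library call and the guided analysis the skill provides.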
The skills/document-editorial/composio-skills/ directory contains 832 integration skills for automating third-party apps through the Composio/Rube MCP platform. Each skill automates a specific SaaS tool, and each one requires searching for the tool's current schema before executing.
| Category | Skills |
|---|---|
| CRM & Sales | salesforce, hubspot, pipedrive, zoho-crm, copper |
| Communication | slack, gmail, twilio, sendgrid, mailchimp |
| Project Management | asana, trello, jira, linear, basecamp, notion |
| Finance | stripe, braintree, quickbooks, brex, xero |
| Developer Tools | github, gitlab, bitbucket, buildkite, browserbase-tool |
| Analytics | google-analytics, amplitude, mixpanel, segment |
| Storage | google-drive, dropbox, box, bunnycdn |
| Marketing | mailchimp, campaign-cleaner, brandfetch, brightdata |
| HR & Recruiting | breezy-hr, bamboo-hr, gusto |
| E-commerce | booqable, boxhero, bubble |
| Browsers & Scraping | browserless, browser-tool, browseai |
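The schema-first rule these integration skills enforce can be sketched in miniature. Everything here is hypothetical and self-contained: the registry dict stands in for a live MCP schema search, and none of these names are the real Composio/Rube API:

```python
# Hypothetical stand-in for a live MCP schema lookup.
SCHEMA_REGISTRY = {
    "slack.send_message": {"required": ["channel", "text"]},
    "github.create_issue": {"required": ["repo", "title"]},
}

def execute_tool(tool: str, args: dict) -> str:
    # Step 1: look up the current schema instead of assuming field names.
    schema = SCHEMA_REGISTRY.get(tool)
    if schema is None:
        raise KeyError(f"unknown tool: {tool}")
    # Step 2: validate arguments against the schema before executing.
    missing = [f for f in schema["required"] if f not in args]
    if missing:
        raise ValueError(f"missing fields for {tool}: {missing}")
    return f"executed {tool}"

ok = execute_tool("slack.send_message", {"channel": "#general", "text": "hi"})
```

Validating against a freshly fetched schema, rather than a memorized one, is what keeps 832 separate integrations from silently breaking when a provider changes a field name.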

