# SQL Consulting for Oregon Companies — SQL Server, PostgreSQL, and the Migration Between Them

SQL consulting in Oregon means three things in 2026: performance tuning slow queries on production SQL Server and PostgreSQL databases, migrating off Microsoft SQL Server to PostgreSQL to escape per-core licensing fees, and modernizing the Access, FoxPro, and Excel-based "systems" that still run a surprising number of Portland, Salem, Eugene, and Beaverton businesses. FreedomDev has been doing this work for Oregon companies since 2006 — remote-first, flat-rate, with the source code and documentation handed to you at the end. We are not the local Portland firm. We are the database engineering team that finishes the job.

---

## Features

### Microsoft SQL Server Consulting for Oregon Companies — All Versions, Including the Ones You Should Already Have Migrated

If you run SQL Server in Oregon, FreedomDev has done deep work on every supported version, plus the legacy versions you probably should not still be running. The honest truth about SQL Server consulting in 2026 is that half the engagements are upgrades from versions that lost extended support years ago. The other half are performance work on current versions that grew faster than the schema was designed for.

**SQL Server 2022 and 2019.** Current generation. We design new deployments, configure Always On Availability Groups for high availability, integrate with Azure SQL Managed Instance when the cost math works, and tune the Query Store telemetry that Microsoft introduced in 2016 and finally made usable in 2019. SQL Server 2022 adds Azure Synapse Link, ledger tables for tamper-evident audit, and Intelligent Query Processing improvements that meaningfully change how the optimizer handles batch-mode execution on rowstore.

**SQL Server 2017 and 2016.** Still supported, still in production at most Oregon companies we work with. The 2016 release was the inflection point: columnstore indexes became usable for hybrid OLTP/analytical workloads, JSON support arrived (badly, but it shipped), and Row-Level Security gave us multi-tenant isolation without application-layer trickery. We have done several 2014→2016 upgrades for Oregon manufacturers where the driver was Always Encrypted for PCI scope reduction.

**SQL Server 2014.** Extended support ended July 9, 2024. If you are still on 2014, you are running software that no longer receives security patches — reason enough on its own to plan a migration. We do these migrations — usually to 2022 if you are staying on SQL Server, or to PostgreSQL if you have decided the per-core licensing is no longer justifiable.

**SQL Server 2012, 2008 R2, and 2008.** End-of-support, period. These show up in three places in Oregon: a single departmental application that nobody wants to touch, a vendor product whose support contract specified a SQL version (and the vendor went out of business), and engineering firms running legacy CAD-data servers. We migrate them. The shape of the engagement is almost always the same: stand up a parallel modern instance, replicate the legacy data, switch the application connection strings, and decommission the old server during a documented maintenance window.

**SQL Server 2005 and 2000.** Yes, still encountered. The cyber insurance market increasingly rejects companies running these. We get called when underwriting raises the question. The migration path is usually direct to SQL Server 2022 or to PostgreSQL — the in-between versions are not worth the second migration cost.

**Real-world example.** In the last 12 months, the most common Oregon SQL Server engagement was: tune the slowest 20 queries on a production 2019 instance running an Epicor Kinetic ERP, then design and deploy an Always On AG to a secondary data center for failover. The performance work paid for the AG project on its own — three of the 20 queries were causing 60% of database time, and they were missing indexes that the original DBA never added because the system was "fast enough at launch."
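On a Query Store-enabled instance (2016+), the triage for "slowest 20 queries by total time" usually starts from something like the query below — a sketch of the shape of the diagnostic, not a complete methodology; adjust the `TOP` count and the `ORDER BY` metric (duration, logical reads, CPU) to the problem at hand:

```sql
-- Top 20 queries by total duration across the Query Store retention window.
-- avg_duration is reported in microseconds, so the total is divided by
-- 1000 to show milliseconds.
SELECT TOP 20
    qt.query_sql_text,
    SUM(rs.count_executions)                              AS executions,
    SUM(rs.avg_duration * rs.count_executions) / 1000.0   AS total_duration_ms,
    SUM(rs.avg_logical_io_reads * rs.count_executions)    AS total_logical_reads
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query         AS q  ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan          AS p  ON p.query_id      = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id      = p.plan_id
GROUP BY qt.query_sql_text
ORDER BY total_duration_ms DESC;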

### PostgreSQL Consulting in Oregon — The Wedge Local Microsoft Shops Miss

Progent does not offer PostgreSQL consulting. Adrianne Martin Consulting does not. Row Consulting handles "MySQL, SQL Server, and Oracle" — no PostgreSQL. Brent Ozar Unlimited is explicitly a SQL Server consultancy. **Every other firm ranking for "sql consulting oregon" is a Microsoft-only shop.**

That matters because most new database deployments in 2026 are PostgreSQL. AWS RDS PostgreSQL adoption now exceeds RDS for SQL Server. Every cloud-native Oregon SaaS startup we have talked to in the last 18 months is on PostgreSQL. And the migration trade — from SQL Server to PostgreSQL, specifically — is the most common database engagement we run for mid-market Oregon companies.

What we do on PostgreSQL:

- **Architecture and schema design** for new applications. We make the obvious-in-hindsight choices that prevent the year-three rewrite: proper use of JSONB versus relational columns, a partitioning strategy for tables that will cross 100M rows, declarative range partitions for time-series data, and BRIN indexes instead of B-tree for naturally-ordered data (10x smaller, faster scans on the right shape of query).
- **Performance tuning** using `pg_stat_statements`, `auto_explain`, and `EXPLAIN (ANALYZE, BUFFERS)`. The standard engagement is a 7-day profile to find the queries that account for 80% of database time, then targeted index and query-plan work to bring those queries under their SLA. We have done this for an Oregon e-commerce company where the cart query was timing out at peak load — the root cause was a missing composite index on `(user_id, status, updated_at)`. 45 minutes of index work; checkout went from 8 seconds to 180ms.
- **Replication and high availability** with streaming replication for sub-second standby lag, plus Patroni for automatic failover. Logical replication for zero-downtime version upgrades — write to PG 15, replicate to a PG 16 standby, validate, switch over, decommission. We run this pattern routinely.
- **Connection pooling** with PgBouncer in transaction-pooling mode. The most common production failure mode on PostgreSQL is connection exhaustion — application connection limits set to "high" without understanding that each connection costs roughly 10MB of memory and that the OS has limits. PgBouncer fixes it.
- **Postgres extensions for specific workloads**: PostGIS for any Oregon company doing geospatial work (we have shipped for a Portland logistics firm and a Salem agricultural co-op), TimescaleDB for time-series IoT data, pgvector for AI embeddings and semantic search, and pg_partman for automated partition management.
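For the connection-pooling setup described above, a minimal `pgbouncer.ini` in transaction-pooling mode looks roughly like this — the host, pool sizes, and limits are illustrative placeholders, not a drop-in production config:

```ini
[databases]
appdb = host=10.0.0.5 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction          ; server connection released at transaction end
default_pool_size = 20           ; server connections per user/database pair
max_client_conn = 2000           ; app-side connections PgBouncer will accept
server_idle_timeout = 60
```

One caveat worth stating up front: transaction pooling breaks session-scoped features (session-level `SET`, advisory locks, and, on older PgBouncer versions, protocol-level prepared statements), so the application has to be checked for those before cutting over.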

### SQL Server to PostgreSQL Migration — Why Oregon Companies Are Doing It in 2026

The economic case for migrating off SQL Server has tightened every year since Microsoft moved Standard Edition to per-core licensing. For an Oregon mid-market company running a 16-core production SQL Server with one DR replica, the all-in annual cost (SQL Server Standard licensing + Software Assurance + Windows Server + the operational overhead of a Windows-only DBA) lands between $35,000 and $60,000 per year, depending on whether you have an Enterprise Agreement. PostgreSQL is zero licensing. Hosted Postgres (RDS, Aurora, Azure Postgres) costs cover the compute and storage you would pay anyway.

Migration is not free. The labor cost of a properly-executed SQL Server → PostgreSQL migration for a 200-table OLTP system runs $40,000 to $120,000 depending on the volume of stored procedures, the amount of application code touching T-SQL specifics, and your tolerance for downtime. The break-even is typically 18–30 months, which is why mid-market companies are doing it now and not in 2019 when the licensing math was less stark.

Our migration process, in plain language:

1. **Schema conversion.** Run AWS Schema Conversion Tool (SCT) or `pgloader` to translate the T-SQL DDL to PostgreSQL. Manually fix the tool output for IDENTITY columns (they become SEQUENCEs or identity columns), data type differences (NVARCHAR → TEXT, MONEY → NUMERIC, DATETIME2 → TIMESTAMPTZ), and CHECK constraint syntax. About 80% of the schema converts cleanly; 20% needs hand-tuning.
2. **Stored procedure conversion.** T-SQL to PL/pgSQL is mechanical for simple procedures and ugly for procedures that use CURSOR, CROSS APPLY, or table-valued parameters. We rewrite; we do not auto-translate. Typical project: 60–80 stored procedures, 2–3 weeks of focused work.
3. **Application code review.** Every T-SQL idiom touched by application code needs a port. The common ones: `TOP n` → `LIMIT n`, `GETDATE()` → `NOW()`, `ISNULL` → `COALESCE`, `IDENTITY` columns referenced in INSERT statements (PostgreSQL handles SEQUENCEs differently), and any code using `OPENJSON` or SQL Server's other JSON functions (PostgreSQL has its own, similar but not identical).
4. **Data movement.** AWS DMS (Database Migration Service) for cloud-to-cloud, custom Python or SSIS for on-premise, or `pg_dump`/`pg_restore` for PostgreSQL-to-PostgreSQL version moves. Validation is row counts plus checksums on each table after each phase.
5. **Parallel run.** For OLTP systems, we run both databases live for 1–2 weeks. The application writes to both; we compare query results periodically and reconcile any drift. This catches the migration bugs that would otherwise surface as customer-reported issues post-cutover.
6. **Cutover.** Documented maintenance window, application connection string change, monitoring for 7 days post-cutover with the old database in read-only mode as a rollback option.

We have run this process three times in the last 18 months for Oregon clients. The most painful was a healthcare-adjacent company with HIPAA audit logging requirements where the audit log schema had to be preserved bit-for-bit for compliance. The fastest was a Beaverton SaaS company that had already containerized everything — schema migration to cutover took 6 weeks.
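The "row counts plus checksums" validation in the data-movement step can be sketched in a few lines of Python. This is a simplified illustration, not our production tooling: `table_fingerprint` and `tables_match` are hypothetical names, the rows would come from a `pyodbc` cursor on one side and a `psycopg` cursor on the other, and it assumes both drivers render values (decimals, timestamps) to identical strings — in practice that normalization is most of the work:

```python
import hashlib

def table_fingerprint(rows):
    """Order-insensitive fingerprint of a table: (row count, XOR-combined
    SHA-256 over each row's canonical text form). Order-insensitivity
    matters because SQL Server and PostgreSQL collations can sort
    text differently, so a shared ORDER BY is not reliable."""
    count = 0
    combined = 0
    for row in rows:
        canonical = "|".join("" if v is None else str(v) for v in row)
        digest = hashlib.sha256(canonical.encode("utf-8")).digest()
        combined ^= int.from_bytes(digest, "big")  # XOR is commutative
        count += 1
    return count, combined

def tables_match(source_rows, target_rows):
    """True when both sides have the same row count and combined checksum."""
    return table_fingerprint(source_rows) == table_fingerprint(target_rows)
```

The count catches cardinality drift that the XOR alone could miss (e.g., duplicated rows); any table that fails the comparison gets re-copied and re-checked before the parallel run begins.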

### Database Performance Tuning — What We Actually Do (Not the Marketing Version)

When a Portland manufacturer calls us and says "the database is slow," that is the start of a diagnostic process, not the start of a sales pitch. The first 30 minutes are spent finding out what "slow" means quantitatively: which query, when, what's the wait type, what changed.

**Step 1: Profile.** On SQL Server: Query Store + `sys.dm_exec_query_stats` + `sys.dm_os_wait_stats`. On PostgreSQL: `pg_stat_statements` + `pg_stat_user_indexes` + `auto_explain` for queries above a threshold. The goal of this step is to identify the 10–20 queries that account for 80% of database time. Not the queries developers complain about — the queries the database itself reports as expensive.

**Step 2: Diagnose.** For each of the top queries, pull the execution plan. On SQL Server, that means `SET STATISTICS IO, TIME ON` plus the actual XML plan. On PostgreSQL, that means `EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON)`. We read these plans. Most performance problems show up clearly: a 4-million-row table scan that should be an index seek; a hash join where a nested loop with the right index would be faster; statistics that have not been updated since the last bulk load and are estimating row counts off by 100x.

**Step 3: Fix in order of impact.** Sample diagnostic output from an Oregon engineering services firm last quarter (PostgreSQL 15, 200M-row work orders table):

```
QUERY:
SELECT * FROM work_orders
WHERE customer_id = $1 AND status IN ('open', 'in_progress')
ORDER BY created_at DESC
LIMIT 50;

BEFORE (no composite index, daily total time 4.2 hours):
Limit  (cost=412847.32..412847.45 rows=50 width=412) (actual time=8423.18..8423.21 rows=50)
  ->  Sort  (cost=412847.32..413091.18 rows=97543 width=412) (actual time=8423.16..8423.18 rows=50)
        ->  Bitmap Heap Scan on work_orders  (cost=4127.43..410201.55 rows=97543 width=412) (actual time=89.23..8401.42 rows=98221)

AFTER (CREATE INDEX work_orders_customer_status_created_idx
         ON work_orders (customer_id, status, created_at DESC)
         WHERE status IN ('open', 'in_progress'),
       daily total time 4 minutes):
Limit  (cost=0.56..27.43 rows=50 width=412) (actual time=0.123..0.847 rows=50)
  ->  Index Scan using work_orders_customer_status_created_idx on work_orders  (cost=0.56..52429.18 rows=97543 width=412) (actual time=0.121..0.812 rows=50)
```

The work: write the index, deploy in a maintenance window with `CREATE INDEX CONCURRENTLY` so it does not lock the table, validate the query plan post-deploy, monitor for 7 days. Total billable: 4 hours. Customer-facing impact: the application's busiest screen went from feeling broken to feeling instant.

**Step 4: Document and hand off.** Every performance engagement ships with a written report: queries diagnosed, fixes applied, before/after metrics, and the runbook for when the next one happens. We do not gate-keep our diagnostic process. Your team should be able to identify the next slow query without calling us.
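On PostgreSQL, the profiling step usually starts from a query like the one below — a sketch assuming the `pg_stat_statements` extension is installed and that you are on PG 13+ (where the column is `total_exec_time`); on older versions the column is `total_time`:

```sql
-- Top 20 statements by total execution time, with each statement's
-- share of all tracked database time.
SELECT
    round(total_exec_time::numeric, 1) AS total_ms,
    calls,
    round(mean_exec_time::numeric, 2)  AS mean_ms,
    round((100 * total_exec_time
           / SUM(total_exec_time) OVER ())::numeric, 1) AS pct_of_total,
    left(query, 80)                    AS query_start
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 20;
```

The `pct_of_total` column is what makes the 80/20 shape visible: a handful of rows at the top typically carry most of the database time.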

### High Availability, Disaster Recovery, and Backups That Have Been Tested

Most "we have backups" claims in Oregon mid-market companies fail their first real test. Backups complete; restores have never been validated; the documented RTO is fiction. Our HA/DR work covers three pillars.

**Backups that actually work.** Daily full + transaction log backups every 15 minutes (SQL Server), or continuous WAL archiving + nightly base backups (PostgreSQL). Backups land in cross-region S3 or Azure Blob with versioning, lifecycle policies, and customer-managed encryption keys. We test restore quarterly. If your backup has never been restored to a fresh instance, you do not have backups; you have hope.

**High availability appropriate to your downtime tolerance.** For RPO under 1 minute and RTO under 5 minutes, we deploy SQL Server Always On Availability Groups with a synchronous secondary, or PostgreSQL streaming replication with `synchronous_commit = on` for the critical replica. For RPO under 1 second, we add a synchronous remote standby — this costs latency on every write, which means the application has to be designed for it.

**Disaster recovery to a geographically separate region.** Cross-region async replication, validated by quarterly failover drills. The drill is the whole point — if the runbook only works when the engineer who wrote it is in the room, it does not work. We run the drill with your team, not for them.

For Oregon companies specifically: most cloud-hosted databases are in us-west-2 (Oregon). For real DR we replicate to us-east-1 or us-east-2 — geographically separate, in case the entire us-west-2 region has an outage (which has happened twice in the last five years).
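On the PostgreSQL side, the WAL archiving and synchronous replication described above come down to a handful of `postgresql.conf` settings — the fragment below is a sketch of which knobs are involved; the archive path, standby name, and timeouts are placeholders, not a recommended production config:

```
# Continuous WAL archiving (pairs with nightly base backups)
wal_level = replica                 # minimum level for archiving and streaming
archive_mode = on
archive_command = 'test ! -f /backup/wal/%f && cp %p /backup/wal/%f'
archive_timeout = 300               # force a WAL segment switch at least every 5 min

# Streaming replication with one synchronous standby
max_wal_senders = 5
synchronous_standby_names = 'dr1'   # the replica that must confirm each commit
synchronous_commit = on             # commits wait for the sync standby
```

In practice the `archive_command` would push to S3 or Blob storage via a tool like WAL-G or pgBackRest rather than `cp`; the `cp` form is just the canonical illustration.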

### Legacy Modernization — Access, FoxPro, Excel "Systems," and the Tribal Knowledge They Hold

A meaningful fraction of Oregon mid-market businesses run business-critical operations on Microsoft Access databases written between 2003 and 2010 by a now-retired internal "Excel guy." Modernizing these is not a Microsoft-versus-PostgreSQL question. It is an archaeology question.

What we have migrated in the last two years:

- A Portland engineering firm's 14 linked Access databases (cost estimating, project tracking, equipment scheduling) → a consolidated SQL Server with a Power BI reporting layer
- A Salem manufacturer's 22 years of estimating spreadsheets → a structured cost-history database with a web UI for estimators
- A Beaverton nonprofit's donor records → PostgreSQL + a custom donor portal after their CRM vendor went out of business
- A coastal logistics company's FoxPro inventory system → PostgreSQL with an inventory tracking API for their warehouse mobile app

The pattern is consistent: profile the source (what's there, what relationships are implicit, what data quality problems are hiding), design a target schema that handles three years of growth instead of just replicating the source, build idempotent migration scripts so the conversion can be re-run when a stakeholder remembers a missed business rule, and run a parallel period where the old system is read-only and the new system handles writes. Cost range: $20,000–$80,000 depending on source complexity and the amount of application development needed to replace the legacy UI.
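"Idempotent migration scripts" means every load step can be replayed without duplicating data. On PostgreSQL, that is typically an upsert from a staging table — the sketch below uses hypothetical `customers`/`staging_customers` table and column names, not a real client schema:

```sql
-- Re-runnable load step: new legacy rows are inserted, changed rows are
-- updated, unchanged rows are left alone. Replaying the whole script
-- after a missed business rule is fixed produces the same end state.
INSERT INTO customers (legacy_id, name, city, updated_at)
SELECT s.legacy_id, s.name, s.city, now()
FROM staging_customers AS s
ON CONFLICT (legacy_id) DO UPDATE
SET name       = EXCLUDED.name,
    city       = EXCLUDED.city,
    updated_at = EXCLUDED.updated_at
WHERE (customers.name, customers.city)
      IS DISTINCT FROM (EXCLUDED.name, EXCLUDED.city);
```

The `WHERE ... IS DISTINCT FROM` guard skips no-op updates, which keeps replays cheap and avoids churning `updated_at` on rows that did not change.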

---

## Benefits

### Flat-rate engagements

Every scope quoted as a fixed price. No hourly billing surprises.

### Source code ownership

Your team owns the code, runbook, and documentation at handoff. We are not a managed-service lock-in.

### 20+ years building production systems

Deep operational experience across SQL Server, PostgreSQL, and the integration layer around them.

### Remote-first since 2009

Standard engagement model. Time-zone overlap, on-call coverage, in-person visits planned per project scope.

---

## Our Process

1. **Discovery call (30–60 min).** — You describe the symptom. We ask the diagnostic questions that distinguish "slow query" from "broken architecture."
2. **Read-only access provisioning.** — AWS IAM role, Azure RBAC entry, or time-limited VPN credential with SELECT-only access to the database. We do not need write access to diagnose.
3. **Profiling week.** — We instrument and gather data. Deliverable is a written diagnosis with the top 3–5 root causes ranked by impact.
4. **Scope and proposal.** — Flat-rate quote per fix with clear deliverables. No hourly billing surprises. No retainer required.
5. **Implementation.** — Changes go through your change-management process. We never deploy directly to production without your sign-off.
6. **Post-deploy validation.** — We monitor for 7 days after each significant change, document the before/after metrics, and hand the runbook to your team. Time-zone overlap with Pacific from our Michigan base: 3 hours, enough for daily standups and emergency response. P0 incidents: 24/7 on-call rotation included for retainer clients.
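The read-only access in step 2, on PostgreSQL, is usually a role along these lines — a sketch with placeholder role, schema, and database names; on SQL Server the equivalent is a login mapped to the `db_datareader` role:

```sql
-- Time-limited, SELECT-only diagnostic role.
CREATE ROLE diag_readonly LOGIN PASSWORD 'changeme' VALID UNTIL '2026-06-01';
GRANT CONNECT ON DATABASE appdb TO diag_readonly;
GRANT USAGE ON SCHEMA public TO diag_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO diag_readonly;
-- Cover tables created after the grant:
ALTER DEFAULT PRIVILEGES IN SCHEMA public
    GRANT SELECT ON TABLES TO diag_readonly;
-- pg_monitor exposes pg_stat_statements and the other stats views
-- the profiling week depends on.
GRANT pg_monitor TO diag_readonly;
```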

---

## Key Stats

- **20+**: Years doing database engineering
- **15–25**: Typical concurrent client engagements supported
- **3**: SQL Server → PostgreSQL migrations shipped in the last 18 months
- **12 TB**: Largest single-instance production PostgreSQL database we have tuned
- **3–5**: Root causes, ranked by impact, in the typical first-week written diagnostic report

---

## Frequently Asked Questions

### Do I need a database consultant physically located in Oregon?

For 95% of database work in 2026, no. The work involves logging into your database, reading query plans, reviewing application code, and writing migration scripts. None of that requires physical presence. The exception is on-premise data center work involving physical hardware — rack-mounted SAN reconfiguration, on-prem network changes — which is rare for mid-market companies that have moved to cloud or colocation. FreedomDev has done SQL consulting for Oregon companies remotely since 2009 with no client ever asking us to be on-site.

### How is FreedomDev different from Progent, Brent Ozar, or other Oregon SQL consultants?

Three real differences. (1) **Dual-platform**: we are equally strong on SQL Server and PostgreSQL. Most Oregon SQL consultants are Microsoft-only. If you might migrate or are running both, we are the right vendor profile. (2) **Integrated development**: we are a software development company that happens to do database work at expert level. When the database problem is downstream of an application problem (which it usually is), we can fix it at both layers without coordinating with a separate development team. (3) **Pricing transparency**: flat-rate scopes, published price ranges, no retainer required. The other firms quote on request. We quote on the page.

### What is the actual cost difference between SQL Server and PostgreSQL for a mid-market Oregon company?

For a 16-core production server with one DR replica, SQL Server Standard Edition licensing runs $1,793 per core per year (Microsoft list price, Software Assurance bundled into Enterprise Agreement pricing), so 16 cores × 2 servers ≈ $57,376/year in licensing alone. PostgreSQL is $0/year in licensing. Operational cost difference: roughly equivalent (a competent Postgres DBA costs the same as a competent SQL Server DBA; managed services like AWS RDS and Azure Database for PostgreSQL cost similarly). Annual savings from migration: $35,000–$60,000 once the migration is complete. Migration cost: $40,000–$120,000 one-time. Break-even: 18–30 months.

### Can FreedomDev help with SQL Server 2014 end-of-support migrations?

Yes. SQL Server 2014 mainstream support ended July 9, 2019; extended support ended July 9, 2024. Continuing to run 2014 means running unpatched software, which is a security exposure and increasingly a cyber insurance underwriting flag. The migration path we recommend depends on the math: if you have committed to the Microsoft stack and the licensing cost is acceptable, migrate to SQL Server 2022. If the licensing cost is uncomfortable and your application is portable, migrate to PostgreSQL. We have run both paths in the last 12 months for Oregon companies.

### What is the 80/20 rule in SQL? (PAA capture)

The 80/20 rule applied to SQL means roughly 20% of queries in any production database account for 80% of database time. Performance tuning that ignores this and tries to "optimize everything" produces marginal results. Performance tuning that identifies the top-20 queries by total time and fixes them in order of impact produces dramatic results. Every diagnostic engagement FreedomDev runs starts with a profile of the top-20 by total time, not the top-20 by individual slowest execution.

### Is SQL still relevant in 2026? (PAA capture)

SQL is more relevant in 2026 than it was in 2020. The major databases — PostgreSQL, SQL Server, MySQL, Oracle — all remained dominant through the NoSQL wave of the 2010s. PostgreSQL adoption has accelerated specifically because of how well its SQL implementation handles modern workloads (JSONB for semi-structured data, pgvector for AI embeddings, PostGIS for geospatial). Cloud data warehouses (Snowflake, BigQuery, Redshift) standardized on SQL. SQL is the durable abstraction layer; the underlying engines change, the language stays.

### Who is Brent Ozar? (PAA capture — because the question shows up in the SERP)

Brent Ozar runs Brent Ozar Unlimited, a SQL Server consulting and training firm that publishes some of the most-cited content in the SQL Server world (the "How to Think Like the Engine" series, the sp_BlitzFirst diagnostic, wait stats methodology). His Portland-area page ranks at position 4 for "sql consulting oregon" without explicitly targeting the query — the page is 262 words and mostly social proof. He is a credible SQL Server expert. FreedomDev competes on a different axis: dual-platform coverage and integrated development. We have read and recommended his content.

---

**Canonical URL**: https://freedomdev.com/services/sql-consulting/oregon

_Last updated: 2026-05-12_