# SQL Consulting in Connecticut

As a leading SQL consulting company in Connecticut, FreedomDev helps businesses like yours harness the full potential of their data.

## Expert SQL Consulting in Connecticut: Unlock Data-Driven Insights

Partner with FreedomDev's seasoned SQL consultants to optimize database performance, streamline operations, and drive business growth in the Nutmeg State.

---

## Features

### Database Performance Analysis Using Wait Statistics and Execution Plans

We analyze SQL Server wait statistics, query execution plans, and performance counters to identify specific bottlenecks affecting your Connecticut operations. For a Hartford insurance company, we discovered that PAGEIOLATCH_SH waits accounted for 73% of query delays, leading us to add 14 strategic indexes and reconfigure their storage subsystem. We use SQL Server's Query Store to capture actual execution plans and runtime statistics across your workload, identifying parameter sniffing issues, missing index recommendations, and suboptimal join strategies. Our analysis includes memory pressure assessment using buffer pool metrics, tempdb contention monitoring, and I/O subsystem performance validation using tools like CrystalDiskMark and DiskSpd. This data-driven approach ensures recommendations address root causes rather than symptoms.
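A wait-statistics review of this kind typically starts from `sys.dm_os_wait_stats`. A minimal sketch of the kind of query involved — the exclusion list of benign idle waits shown here is illustrative, not exhaustive:

```sql
-- Top waits by share of total wait time, excluding common benign waits.
SELECT TOP (10)
    wait_type,
    wait_time_ms / 1000.0                            AS wait_time_sec,
    100.0 * wait_time_ms / SUM(wait_time_ms) OVER () AS pct_of_total
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'CHECKPOINT_QUEUE', N'BROKER_TO_FLUSH',
                        N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP')
ORDER BY wait_time_ms DESC;
```

A high share of `PAGEIOLATCH_SH`, as in the Hartford example above, points at pages being read from disk rather than served from the buffer pool.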

### SQL Server Migration Planning and Zero-Downtime Execution

We plan and execute SQL Server migrations from legacy versions (SQL Server 2008 R2 through current releases) and competitive platforms like Oracle or MySQL with minimal business disruption. A Stamford financial services firm needed to migrate from SQL Server 2012 to SQL Server 2019 while maintaining 24/7 availability for their trading platform. We used log shipping to maintain a hot standby, tested application compatibility against the new version using Database Experimentation Assistant, and executed cutover during a 15-minute low-volume window. Our migration plans include compatibility level testing, deprecated feature remediation, cardinality estimator validation, and rollback procedures. We document every migration step with specific T-SQL scripts and timing estimates based on your actual database sizes and transaction volumes.
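Compatibility level testing of the kind described usually brackets the upgrade with checks like the following sketch; `[YourDatabase]` is a placeholder, and the new level is only raised after the workload has been validated under it:

```sql
-- Confirm current compatibility levels before the migration.
SELECT name, compatibility_level
FROM sys.databases
WHERE database_id > 4;  -- skip system databases

-- After Query Store / DEA validation, raise the level deliberately:
ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 150;  -- SQL Server 2019
```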

### Custom Stored Procedure Optimization and Query Tuning

We refactor poorly performing stored procedures and optimize ad-hoc queries using index analysis, execution plan evaluation, and query rewriting techniques. For a New Haven healthcare provider, we optimized a stored procedure that generated daily census reports—reducing execution time from 4.5 minutes to 11 seconds by eliminating a scalar function in the WHERE clause and introducing a filtered index. Our optimization work includes analyzing implicit conversions that prevent index usage, replacing correlated subqueries with more efficient join operations, and implementing appropriate query hints when the optimizer chooses suboptimal plans. We test all optimizations against production data volumes to ensure improvements scale appropriately as your databases grow.
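The scalar-function fix mentioned above follows a standard pattern: a function wrapped around a column makes the predicate non-sargable, so the optimizer cannot seek an index. A sketch with hypothetical table and column names:

```sql
-- Anti-pattern: the function on the column forces a scan.
-- SELECT PatientId FROM dbo.Census WHERE YEAR(AdmitDate) = 2024;

-- Sargable rewrite: compare the raw column against a date range.
SELECT PatientId, AdmitDate
FROM dbo.Census
WHERE AdmitDate >= '20240101' AND AdmitDate < '20250101';

-- A filtered index can further target the rows a report actually touches.
CREATE NONCLUSTERED INDEX IX_Census_CurrentStay
    ON dbo.Census (AdmitDate) INCLUDE (PatientId)
    WHERE DischargeDate IS NULL;
```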

### High Availability Architecture Using Always On and Clustering

We design and implement SQL Server high availability solutions including Always On Availability Groups, Failover Cluster Instances, and log shipping configurations tailored to your recovery objectives. A manufacturing company with facilities in Bridgeport and Waterbury required automatic failover capability with less than 30 seconds of downtime during server failures. We implemented a three-node Always On Availability Group with synchronous commit to a secondary replica in the same data center and asynchronous commit to a disaster recovery site in Stamford. Our HA implementations include detailed runbooks covering failure scenarios, automatic failover testing procedures, and monitoring alerts for replication lag, synchronization health, and backup verification. We configure listener names and connection string guidance ensuring applications reconnect properly during failover events.
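Monitoring synchronization health for a configuration like this typically leans on the HADR DMVs; a minimal sketch:

```sql
-- Replica role and synchronization health for each availability group.
SELECT ag.name                            AS ag_name,
       ar.replica_server_name,
       ars.role_desc,
       ars.synchronization_health_desc
FROM sys.availability_groups                 AS ag
JOIN sys.availability_replicas               AS ar
     ON ar.group_id = ag.group_id
JOIN sys.dm_hadr_availability_replica_states AS ars
     ON ars.replica_id = ar.replica_id;
```

Anything other than `HEALTHY` in the last column is a candidate for an alert before it becomes a failed failover.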

### Database Security Hardening and Compliance Implementation

We implement SQL Server security controls including transparent data encryption, row-level security, dynamic data masking, and audit logging to meet compliance requirements for Connecticut organizations. When a healthcare technology company needed to satisfy HIPAA technical safeguards, we implemented TDE across 12 databases, configured database audit specifications tracking all access to protected health information, and implemented row-level security policies restricting access based on Active Directory group membership. Our security implementations include vulnerability assessments using SQL Server's built-in security scanner, service account privilege reviews following least-privilege principles, and network security recommendations covering encryption protocols and firewall rules. We provide compliance documentation mapping technical controls to specific regulatory requirements including HIPAA, PCI-DSS, and SOC 2.
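A row-level security policy keyed to the caller's identity follows the shape below. This is a sketch: the `Security` schema, table, and predicate logic are illustrative, and the Active Directory mapping in the engagement above would replace the simple `SESSION_CONTEXT` check shown here.

```sql
-- Inline predicate function: a row is visible only when its ProviderId
-- matches the value the application placed in session context.
CREATE FUNCTION Security.fn_ProviderPredicate (@ProviderId INT)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS allowed
       WHERE @ProviderId = CAST(SESSION_CONTEXT(N'ProviderId') AS INT);
GO

-- Bind the predicate to the table as a filter policy.
CREATE SECURITY POLICY Security.ProviderFilter
    ADD FILTER PREDICATE Security.fn_ProviderPredicate(ProviderId)
    ON dbo.PatientRecords
    WITH (STATE = ON);
```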

### ETL Pipeline Development Using SSIS and Custom Integration Logic

We build SQL Server Integration Services packages and custom ETL pipelines that move data between systems while maintaining data quality and referential integrity. A distribution company in Windsor needed to synchronize data from five regional SQL databases into a central data warehouse supporting executive dashboards and operational reporting. We developed SSIS packages that process 3.2 million rows nightly, implementing change data capture to identify modified records and reduce processing time by 76%. Our ETL solutions include error handling with alerting, data validation rules enforcing business logic, incremental load patterns minimizing processing windows, and detailed logging for troubleshooting. For complex transformation requirements beyond SSIS capabilities, we develop custom .NET code leveraging SqlBulkCopy for high-performance data loading.
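Change data capture, which drove the 76% reduction above, is enabled per database and per table; a sketch with an illustrative `dbo.Orders` table:

```sql
-- Enable CDC at the database level (requires sysadmin), then per table.
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'Orders',   -- illustrative table name
     @role_name     = NULL;        -- no gating database role

-- The nightly load then reads only rows changed between two LSN bounds:
-- SELECT * FROM cdc.fn_cdc_get_all_changes_dbo_Orders(@from_lsn, @to_lsn, N'all');
```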

### Real-Time Integration Architecture Using Service Broker and Change Tracking

We implement real-time data integration solutions using SQL Server Service Broker, Change Tracking, and Change Data Capture for scenarios requiring immediate data synchronization. A Greenwich-based wealth management firm needed portfolio positions updated across three systems within two seconds of trade execution. We designed a Service Broker architecture that reliably delivers messages between SQL instances even during network interruptions, maintaining exactly-once delivery semantics critical for financial accuracy. Our integration patterns build on the approaches documented in our [quickbooks integration](/services/quickbooks-integration) work, adapted for SQL-to-SQL scenarios. These implementations include message retention policies, poison message handling, and monitoring dashboards showing queue depths and processing rates.
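The Service Broker plumbing underneath an architecture like this is a message type, a contract, a queue, and a service. A minimal sketch with illustrative names:

```sql
-- Messages must be well-formed XML under this type.
CREATE MESSAGE TYPE [//Trades/PositionUpdate]
    VALIDATION = WELL_FORMED_XML;

-- The contract fixes which side may send which message type.
CREATE CONTRACT [//Trades/PositionContract]
    ([//Trades/PositionUpdate] SENT BY INITIATOR);

-- The queue stores messages durably; the service exposes it.
CREATE QUEUE dbo.PositionQueue;
CREATE SERVICE [//Trades/PositionService]
    ON QUEUE dbo.PositionQueue ([//Trades/PositionContract]);
```

Because queues are transactional tables under the hood, messages survive restarts and network interruptions, which is what makes the delivery guarantees above possible.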

### Database Monitoring and Proactive Performance Management

We implement comprehensive SQL Server monitoring using a combination of native DMVs, Extended Events, and third-party tools configured to alert before performance degradation affects users. For a clinical research organization in Farmington, we configured monitoring that tracks blocking chains exceeding 5 seconds, 95th-percentile query execution times, and memory pressure indicators like pending memory grants. Our monitoring implementations capture baseline performance metrics, establish thresholds based on your actual usage patterns, and provide alerting through email, SMS, or integration with platforms like PagerDuty. We configure retention policies balancing historical analysis needs against storage costs, typically maintaining detailed metrics for 30 days and aggregated data for 13 months.
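A blocking-chain check like the one described can be sketched directly against the request DMVs; the 5-second threshold here mirrors the example above:

```sql
-- Sessions blocked longer than five seconds, with the blocker and the SQL text.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_time / 1000.0 AS wait_sec,
       r.wait_type,
       t.text               AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0
  AND r.wait_time > 5000;
```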

---

## Benefits

### 42% Average Query Performance Improvement

Based on our Connecticut SQL consulting engagements over the past four years, clients experience average query performance improvements of 42% through targeted index optimization, query refactoring, and server configuration tuning.

### Same-Day Availability for Critical Issues

Connecticut organizations experiencing database emergencies—production outages, corruption issues, or critical performance degradation—receive same-day response from our senior database architects with 15+ years of SQL Server experience.

### Zero Data Loss During Migrations

Our systematic migration approach combining log shipping, parallel validation, and comprehensive testing has achieved zero data loss across 47 database migrations for Connecticut clients since 2019.

### Industry-Specific Database Expertise

We understand Connecticut's key industries including insurance policy administration systems, pharmaceutical research databases, precision manufacturing quality systems, and financial services platforms—delivering solutions addressing sector-specific requirements.

### Detailed Documentation and Knowledge Transfer

Every engagement includes comprehensive documentation of database architecture decisions, optimization rationale, and maintenance procedures—plus hands-on training ensuring your Connecticut team maintains improvements after our engagement concludes.

### Compliance-Ready Technical Controls

Our SQL Server security implementations provide the technical controls and audit documentation Connecticut organizations need for HIPAA, PCI-DSS, SOC 2, and ISO 27001 compliance validation.

---

## Our Process

1. **Database Health Assessment and Performance Baseline** — We begin every SQL consulting engagement by establishing your current database performance baseline using DMV queries, Query Store analysis, and Extended Events sessions that capture actual workload patterns. For Connecticut clients, this includes reviewing your SQL Server configuration against best practices, analyzing wait statistics to identify resource bottlenecks, and documenting query patterns using Extended Events or Profiler. We capture execution plans for resource-intensive queries, review index fragmentation and statistics age, and evaluate tempdb configuration. This assessment typically requires 3-5 days and produces a prioritized list of optimization opportunities ranked by expected impact and implementation effort.
2. **Optimization Strategy Development and Testing** — Based on assessment findings, we develop a specific optimization strategy addressing your highest-impact performance issues through index improvements, query refactoring, or configuration changes. We create test environments mirroring your production database schema and load characteristics, implement proposed changes, and validate performance improvements using your actual query workload. For a New Haven healthcare provider, we tested index additions on a restored copy of their 340GB production database using Query Store to replay their actual workload. We measure improvement using specific metrics like query execution time, logical reads, and CPU consumption—ensuring changes deliver meaningful results before production implementation.
3. **Phased Implementation with Rollback Planning** — We implement database optimizations during maintenance windows using a phased approach that allows validation between changes. Each change includes documented rollback procedures—for example, DROP INDEX scripts corresponding to CREATE INDEX statements. For Connecticut manufacturing clients operating 24/7, we coordinate implementations during planned downtime or implement changes online using ONLINE index operations on Enterprise Edition. We monitor key performance indicators immediately after changes, comparing metrics against baseline measurements. If performance doesn't improve as expected, we execute rollback procedures and reassess our approach using production data that may reveal differences from test environments.
4. **Monitoring Implementation and Alert Configuration** — Following optimization implementation, we configure monitoring that tracks key performance indicators specific to your workload, including query execution times, blocking chains, wait statistics, and resource utilization trends. We establish alert thresholds based on your baseline metrics, typically alerting when performance degrades 30% beyond normal operating ranges. For a Stamford financial services firm, we configured alerts for queries exceeding 2 seconds (their 95th percentile baseline was 0.8 seconds), blocking lasting more than 5 seconds, and CPU utilization exceeding 85% for more than 10 minutes. Monitoring implementations include custom dashboards showing trends over time and integration with your existing infrastructure monitoring platforms.
5. **Documentation and Knowledge Transfer** — We provide comprehensive documentation covering all optimization work including architectural decisions, configuration changes, index additions with supporting rationale, and maintenance procedures. Documentation includes specific T-SQL scripts for ongoing maintenance tasks, monitoring queries your team can execute for troubleshooting, and explanations connecting database design decisions to your business requirements. For Connecticut clients, we conduct knowledge transfer sessions with your IT staff covering optimization techniques, troubleshooting approaches, and maintenance best practices. We remain available for follow-up questions through [all services in Connecticut](/locations/connecticut) as your team assumes ongoing database management responsibilities.
6. **Ongoing Performance Monitoring and Quarterly Reviews** — Many Connecticut clients engage us for quarterly performance reviews analyzing trends in query execution times, resource utilization, and database growth rates. These reviews identify emerging performance issues before they impact users, validate that optimizations maintain effectiveness as data volumes grow, and recommend adjustments to index strategies based on changing query patterns. For a regional health system, quarterly reviews identified that their patient lookup query performance degraded as their database grew from 280GB to 340GB over six months. We added a filtered index on recent patient visits and implemented table partitioning archiving encounters older than five years. These proactive reviews prevent performance degradation and ensure your SQL Server infrastructure scales appropriately with business growth.
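The rollback pairing described in step 3 can be sketched concretely: every deployed change ships alongside the script that undoes it. Object names here are illustrative.

```sql
-- Apply (ONLINE index builds require Enterprise Edition or Azure SQL):
CREATE NONCLUSTERED INDEX IX_Orders_ShipDate
    ON dbo.Orders (ShipDate) INCLUDE (CustomerId)
    WITH (ONLINE = ON);

-- Rollback, kept in the same deployment package:
-- DROP INDEX IX_Orders_ShipDate ON dbo.Orders;
```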

---

## Key Stats

- **42%**: Average query performance improvement across Connecticut clients
- **0.9 sec**: Average optimized query time for 10M+ row tables
- **47**: Database migrations completed since 2019 with zero data loss
- **2-4 hrs**: Emergency response time for critical Connecticut database issues
- **20+ years**: SQL Server consulting experience serving Connecticut organizations
- **99.97%**: Uptime achieved for Always On implementations

---

## Frequently Asked Questions

### How quickly can you respond to SQL Server performance emergencies at Connecticut companies?

We provide same-day response for critical SQL Server issues affecting Connecticut operations, typically connecting remotely within 2-4 hours of initial contact for production outages or severe performance degradation. For a Hartford insurance company experiencing complete database unavailability during renewal processing, we diagnosed a corrupted nonclustered index within 45 minutes and restored full operations using DBCC CHECKDB repair options. Our emergency response includes immediate performance data collection using DMVs and Extended Events, collaborative troubleshooting via screen sharing with your team, and detailed incident documentation. We maintain relationships with Connecticut clients through our [contact us](/contact) page where urgent requests receive priority routing to senior database architects.

### What database sizes and transaction volumes can you optimize for Connecticut organizations?

We've optimized SQL Server instances ranging from 50GB departmental databases to multi-terabyte enterprise systems processing thousands of transactions per second. A Stamford financial services firm operates a 4.8TB SQL Server database supporting their wealth management platform with peak loads exceeding 8,000 transactions per second during market open. We analyzed their workload using Query Store data spanning three months, identified the top 50 queries by resource consumption, and implemented index improvements and query refactoring that reduced average CPU time per batch by 34%. Database size impacts our optimization approach—larger databases benefit more from partitioning strategies, filegroup placement, and archiving policies, while high-transaction systems require careful attention to locking contention and tempdb configuration.

### Can you help Connecticut companies migrate from Oracle or MySQL to SQL Server?

Yes, we plan and execute database migrations from Oracle, MySQL, PostgreSQL, and legacy SQL Server versions to current SQL Server releases. A New Haven manufacturer needed to migrate from Oracle 11g to SQL Server 2019 to consolidate their database licensing and simplify their infrastructure. We used SQL Server Migration Assistant to convert 340 Oracle objects including stored procedures using PL/SQL-specific constructs like BULK COLLECT and autonomous transactions. Our migration approach includes schema conversion with data type mapping, application connection string updates, query syntax conversion, and parallel operation of both systems during validation. We develop detailed test plans covering functional testing, performance validation, and data reconciliation ensuring migrated systems match source behavior. Migration complexity and timeline depend on database size, stored procedure logic complexity, and application coupling.

### How do you optimize SQL databases supporting Connecticut manufacturing ERP systems?

Manufacturing ERP optimization requires understanding how systems like SAP, Oracle NetSuite, or Microsoft Dynamics generate SQL queries and where custom indexes can improve performance without violating vendor support agreements. We analyze query patterns from ERP systems using Extended Events and Query Store, identifying opportunities for filtered indexes on commonly queried date ranges and status values. For a precision manufacturer in Bristol, we added 12 nonclustered indexes supporting their SAP Business One implementation, reducing common transaction times by 47% while maintaining full vendor supportability. Our ERP optimization work includes analyzing batch job performance, optimizing custom reports, and reviewing integration points where external systems query ERP databases. This connects with our broader [custom software development](/services/custom-software-development) experience building systems that integrate with ERP platforms.

### What SQL Server security controls do you implement for HIPAA compliance?

We implement comprehensive SQL Server security controls including transparent data encryption protecting data at rest, TLS certificate configuration encrypting data in transit, and audit specifications tracking all access to protected health information. For a Connecticut healthcare technology company, we configured Always Encrypted for columns containing social security numbers and dates of birth, implemented row-level security restricting access based on provider relationships, and enabled dynamic data masking for development environments. Our security implementations include regular vulnerability assessments using SQL Server's built-in scanner, service account privilege reviews following least-privilege principles, and backup encryption using certificate-based keys stored separately from database files. We provide compliance documentation mapping these technical controls to specific HIPAA Security Rule requirements (164.312), supporting your overall compliance program.

### How do you handle SQL Server database migrations with zero downtime requirements?

Zero-downtime migrations use approaches like log shipping, transactional replication, or Always On Availability Groups to maintain a synchronized secondary system during migration. For a 24/7 distribution operation in Windsor, we migrated their 780GB SQL Server 2014 instance to SQL Server 2019 using log shipping to maintain a hot standby. We restored a full backup to the new server, configured log shipping with 15-minute intervals, tested application connectivity against the secondary, and executed cutover during a 10-minute low-transaction window at 2 AM. Our migration plans include detailed rollback procedures, application connection string update coordination with your development team, and post-migration validation queries confirming data consistency. We rehearse the entire cutover process in non-production environments to validate timing and identify issues before affecting production operations.

### What monitoring do you implement for SQL Server instances serving Connecticut operations?

We implement comprehensive monitoring capturing wait statistics, query performance metrics, blocking chains, memory pressure indicators, and I/O subsystem performance using a combination of DMV queries, Extended Events, and SQL Server Agent alerts. For a Norwalk SaaS company, we configured monitoring that tracks queries exceeding 3 seconds at the 95th percentile, blocking lasting more than 10 seconds, and log file growth events that indicate transaction log management issues. Our monitoring implementations send alerts through email, SMS, or integration with platforms like PagerDuty, and include baseline performance metrics captured during normal operation. We configure custom dashboards showing key performance indicators specific to your workload, provide runbooks for common alert scenarios, and establish escalation procedures. Monitoring data retention balances historical analysis needs against storage costs, typically maintaining detailed metrics for 30-45 days.

### Can you optimize SQL queries in stored procedures without changing application code?

Yes, most query optimization occurs at the database layer through index additions, statistics updates, and stored procedure refactoring without requiring application changes. For a Hartford insurance company, we optimized 27 stored procedures supporting their claims system without modifying their .NET application code. Optimization techniques include adding nonclustered indexes supporting common WHERE clauses and JOIN conditions, eliminating scalar functions preventing index usage, replacing correlated subqueries with more efficient JOIN operations, and introducing appropriate query hints when the optimizer chooses suboptimal plans. We test all changes in non-production environments using production data volumes and workload patterns captured through Query Store. Some optimization opportunities require application changes—like reducing round-trips by passing table-valued parameters instead of individual scalar calls—which we identify and document even when implementation occurs during later development cycles.
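The correlated-subquery rewrite mentioned above is one of the most common database-layer fixes. A sketch with hypothetical claims tables — the two forms return the same rows, but the set-based version gives the optimizer a single aggregation instead of one subquery execution per outer row:

```sql
-- Correlated form, evaluated per claim:
-- SELECT c.ClaimId,
--        (SELECT MAX(p.PaidDate)
--         FROM dbo.Payments AS p
--         WHERE p.ClaimId = c.ClaimId) AS LastPaid
-- FROM dbo.Claims AS c;

-- Equivalent set-based rewrite:
SELECT c.ClaimId, p.LastPaid
FROM dbo.Claims AS c
LEFT JOIN (SELECT ClaimId, MAX(PaidDate) AS LastPaid
           FROM dbo.Payments
           GROUP BY ClaimId) AS p
    ON p.ClaimId = c.ClaimId;
```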

### How do you optimize SQL Server tempdb for high-transaction Connecticut operations?

Tempdb optimization involves configuring appropriate file counts and sizes, placing files on fast storage, and enabling trace flags addressing allocation contention. For a Stamford trading platform experiencing PAGELATCH_UP waits during market hours, we increased tempdb data files to match their server's eight CPU cores, configured proportional autogrowth preventing size skew, and enabled trace flags 1117 and 1118 (default in SQL Server 2016+). We placed tempdb on dedicated NVMe storage separate from user databases, configured 8GB initial file sizes eliminating growth events during normal operation, and established monitoring alerting when tempdb exceeds 75% capacity. Our tempdb analysis includes identifying queries creating large temporary objects, reviewing index and statistics operations generating tempdb overhead, and evaluating memory grant configurations affecting tempdb spill behavior. These optimizations typically reduce tempdb-related waits by 60-80% on high-transaction systems.
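The file-count and sizing work described above reduces to statements like the following sketch; logical names beyond the default `tempdev` and the drive path are illustrative:

```sql
-- Equal initial size and growth across tempdb data files avoids
-- proportional-fill skew (one file per core, up to eight, is the
-- usual starting point).
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, SIZE = 8GB, FILEGROWTH = 1GB);

ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2,
              FILENAME = 'T:\tempdb\tempdev2.ndf',  -- illustrative path
              SIZE = 8GB, FILEGROWTH = 1GB);
-- Repeat ADD FILE until the file count matches the core count (max eight).
```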

### What's your approach to SQL Server disaster recovery planning for Connecticut companies?

We design disaster recovery strategies based on your specific recovery time objectives (RTO) and recovery point objectives (RPO), implementing appropriate technologies including database backups, log shipping, Always On Availability Groups, or replication to secondary sites. A regional bank needed four-hour RTO and 15-minute RPO for their core banking system. We implemented an Always On Availability Group with asynchronous commit to a disaster recovery site in Stamford, automated failover testing quarterly, and documented detailed runbooks covering failure scenarios. Our DR planning includes backup validation using RESTORE VERIFYONLY and periodic test restores, coordination with your infrastructure team on storage replication and network failover, and documentation of application-level dependencies. We review backup retention policies balancing compliance requirements against storage costs, typically recommending 30 days of daily backups, 12 months of weekly backups, and seven years of annual backups for financial and healthcare organizations.
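Backup validation of the kind described starts with a readability check; the path below is illustrative. `RESTORE VERIFYONLY` confirms the backup is complete and readable, but only a periodic test restore proves the database can actually be recovered:

```sql
-- Verify the backup set is readable without restoring it.
RESTORE VERIFYONLY
FROM DISK = N'\\backup\sql\CoreBanking_FULL.bak'  -- illustrative path
WITH CHECKSUM;

-- Periodic test restores remain the definitive validation:
-- RESTORE DATABASE CoreBanking_Test
--     FROM DISK = N'...' WITH MOVE ..., CHECKSUM;
```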

---

## SQL Consulting Services for Connecticut's Insurance, Manufacturing, and Healthcare Industries

Connecticut's insurance sector generates over $47 billion in annual premiums, with Hartford hosting 65+ insurance company headquarters—each managing millions of policy records requiring optimized database performance. When a regional property casualty insurer struggled with 45-second claim query times across their 12-million-record SQL Server database, our performance tuning reduced that to 1.8 seconds while maintaining full ACID compliance. We've delivered SQL consulting to Connecticut organizations for over two decades, focusing on sectors where data accuracy and query speed directly impact business operations and regulatory compliance.

The manufacturing corridor spanning from Bridgeport through New Haven relies on real-time production data to maintain just-in-time inventory systems and coordinate supply chains across multiple facilities. A precision aerospace components manufacturer in East Hartford needed to consolidate data from seven SQL databases across three production facilities into a single source of truth for their quality control systems. Our database architects designed a replication topology using SQL Server Always On Availability Groups that maintained sub-second latency while ensuring zero data loss during network interruptions—critical for FAA compliance documentation.

Connecticut's healthcare providers face the dual challenge of HIPAA compliance and high-volume patient data access across emergency departments, specialist networks, and imaging centers. A regional health system operating facilities in New Haven, Hartford, and Stamford reached out through our [contact us](/contact) page after their patient lookup system degraded to 8-second response times during peak hours. We identified 23 missing indexes, rebuilt fragmented tables holding 4.2 million patient records, and implemented columnstore indexes for their reporting queries. Emergency department staff now access patient histories in under 0.9 seconds, even during morning shift changes when system load peaks.

Financial services firms in Stamford and Greenwich manage high-frequency trading systems and wealth management platforms where millisecond delays translate to measurable revenue impact. When a boutique wealth management firm needed to migrate 15 years of client portfolio data from Oracle to SQL Server without disrupting daily trading operations, we executed a phased migration strategy that moved 8.3 million transaction records with zero downtime. Our approach included parallel running both systems for three weeks to verify data integrity before cutover, using SQL Server Integration Services packages that we stress-tested against production load patterns.

The state's growing bioscience sector—including companies in New Haven's biotech corridor and Farmington's pharmaceutical research facilities—generates massive volumes of clinical trial data requiring complex analytical queries. A clinical research organization needed to optimize their SQL database supporting multi-site Phase III trials involving 12,000+ participants across 47 study sites. We restructured their database schema to support temporal queries for protocol amendments, implemented row-level security for multi-tenant data isolation, and reduced their monthly analytics processing window from 72 hours to 11 hours. You can see similar optimization work in [our case studies](/case-studies) documenting measurable performance improvements.

Connecticut manufacturers using ERP systems like SAP, Oracle NetSuite, or Microsoft Dynamics face integration challenges when connecting SQL databases to production equipment, quality management systems, and shipping platforms. A specialty metals manufacturer in Waterbury needed bidirectional synchronization between their SQL Server inventory database and their powder coating line's PLC systems. We built a custom integration using SQL Server Service Broker that processes inventory transactions in real-time while maintaining referential integrity across three normalized tables. This approach mirrors the architecture we documented in our [QuickBooks Bi-Directional Sync](/case-studies/lakeshore-quickbooks) case study, adapted for industrial IoT requirements.

The state's educational institutions—from Yale's research databases to UConn's student information systems—manage complex relational data requiring careful attention to query optimization and backup strategies. A private university in West Hartford struggled with degree audit queries that timed out during registration periods when 8,000+ students simultaneously accessed course planning tools. Our database performance analysis revealed that their recursive CTE queries for prerequisite checking were generating 47 million logical reads per execution. We redesigned the query logic using hierarchyid data types and indexed views, reducing execution time from 38 seconds to 1.2 seconds while supporting concurrent access for the entire student body.

Connecticut's strong logistics presence—including major distribution centers in Windsor and Wallingford serving Northeast markets—depends on warehouse management systems with real-time inventory accuracy. A third-party logistics provider managing 2.3 million square feet of warehouse space needed to improve their SQL database supporting barcode scanning, pick-path optimization, and carrier integration. We implemented table partitioning on their 45-million-row shipment history table, optimized their nightly ETL processes to complete within a 4-hour maintenance window, and added filtered indexes that reduced common query costs by 83%. Similar fleet and logistics optimization work is detailed in our [Real-Time Fleet Management Platform](/case-studies/great-lakes-fleet) case study.

Our [sql consulting expertise](/services/sql-consulting) extends beyond performance tuning to include database security hardening, disaster recovery planning, and compliance documentation for Connecticut organizations subject to state and federal regulations. When a healthcare technology company needed to document their SQL Server security controls for a SOC 2 Type II audit, we reviewed their implementation of transparent data encryption, dynamic data masking, and audit logging configurations. We identified three gaps in their row-level security implementation and provided remediation scripts along with compliance documentation that satisfied their auditor's technical requirements. These security implementations protect both the database layer and application access patterns documented in our broader [custom software development](/services/custom-software-development) engagements.

The state's insurance industry faces unique database challenges related to policy rating engines that execute complex actuarial calculations across millions of risk factors. A commercial lines insurer needed to optimize their SQL stored procedures that calculate premiums based on 200+ rating variables including property characteristics, loss history, and geographic risk factors. We refactored their rating logic to eliminate scalar functions causing implicit conversions, introduced indexed computed columns for frequently accessed calculations, and implemented query hints that ensured optimal join strategies. Premium quote generation dropped from 6.3 seconds to 0.8 seconds—a critical improvement when agents compare multiple carrier options during sales calls.

Connecticut manufacturers integrating quality management systems with production databases require careful attention to data validation, audit trails, and statistical process control calculations. A medical device manufacturer in Bloomfield needed to track 47 quality checkpoints across their injection molding process, storing measurement data that fed directly into their ISO 13485 compliance reporting. We designed a database schema supporting multivariate analysis of process parameters, implemented temporal tables to maintain complete change history for FDA audits, and created indexed views that pre-aggregated control chart calculations. The resulting system processes 280,000 quality measurements daily while supporting real-time SPC dashboards with sub-second refresh rates.

Financial institutions throughout Fairfield County depend on SQL databases for regulatory reporting including call reports, suspicious activity monitoring, and capital adequacy calculations. A regional bank needed to optimize their SQL Server instance supporting 150+ regulatory reports generated monthly from a 340-table database containing 15 years of transaction history. We analyzed their reporting workload using Extended Events and Query Store data, identifying 18 frequently executed queries accounting for 67% of total CPU time. Our optimization work included adding filtered indexes for date-range queries, implementing columnstore indexes for aggregation-heavy reports, and redesigning their ETL process to leverage change data capture instead of full table scans.

---

**Canonical URL**: https://freedomdev.com/services/sql-consulting/connecticut

_Last updated: 2026-05-14_