Server hosting — commonly known as colocation — means placing your own physical server in a professional data center and renting infrastructure layers like power, cooling, networking and physical security from the provider. The hardware is yours, and so is everything from the operating system up; the provider's responsibility ends at the infrastructure layer. If you run an enterprise or high-traffic site, this guide brings together everything you need to know to pick the right data center and to correctly read cabinet/U count, kVA power, bandwidth, SLA, hands fees and certification line items — with real values, command examples and configuration snippets.

Stepping back for a moment, the decision around server hosting typically starts with three questions: (1) do you really need your own hardware, or is renting a dedicated server enough; (2) should you be located in Turkey or in hubs like Frankfurt/Amsterdam; (3) will you pay monthly or prepay annually? This article aims to answer all three.

Related guides: What is hosting, types and pricing · VPS vs VDS difference · Linux server administration basics · VPS security hardening · Multi-layer DDoS protection · Server monitoring with Prometheus and Grafana

What Is Server Hosting and How Does It Differ From Hosting?

Web hosting is a service where you rent space on a shared or VPS server; you never even see the underlying hardware. Server hosting is the exact opposite: you buy a 1U, 2U or 4U server (Dell PowerEdge, HPE ProLiant, Supermicro, Lenovo ThinkSystem), place it in a rack cabinet at a data center, and rent only the physical accommodation and network connectivity from the provider. The CPU, RAM, disks, RAID card and even the switch are yours.

The hard line between them is this: in shared hosting all hardware belongs to the provider and responsibility is roughly 100% the provider's; with a dedicated server rental the hardware is still the provider's but management is largely on you; with colocation the hardware is yours and the provider only delivers power, cooling, network and physical security. The cost curve breaks accordingly. In practice, if your engineering team needs more than 50 physical CPU cores and 512 GB of RAM, or requires special hardware (NVIDIA H100/A100 GPUs, FPGAs, custom disk arrays), server hosting starts to make sense. When you calculate three-year TCO, you'll routinely see the same performance costing 2–4x more on cloud; a back-of-the-envelope sketch follows the list below.

  • Shared hosting — around $2-15 USD/month, zero management, limited resources, shared IP.
  • VPS — around $7-70 USD/month, virtual server on a hypervisor, root access, flexible scaling.
  • Dedicated rental — around $130-800 USD/month, physical server owned by the provider, used by you.
  • Colocation (server hosting) — around $80-500 USD/U/month, hardware is yours, infrastructure is rented, longest-lived solution.
  • Your own data center — seven-figure investment; the only meaningful scenario is banking/defense.
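
To make the 2–4x TCO claim tangible, here is a back-of-the-envelope comparison. Every figure below is a placeholder, not a quote; substitute your own hardware price and provider offers before drawing conclusions.

```bash
#!/usr/bin/env bash
# Hypothetical 3-year TCO: one 2U/512 GB colo box vs a comparable cloud instance.
server_capex=14000   # 2U, 2x32-core, 512 GB RAM, NVMe (one-time, assumed)
colo_monthly=300     # 2U colocation incl. power commit (assumed)
bw_monthly=150       # bandwidth share (assumed)
cloud_monthly=2600   # comparable dedicated cloud instance + egress (assumed)
months=36
colo_total=$(( server_capex + (colo_monthly + bw_monthly) * months ))
cloud_total=$(( cloud_monthly * months ))
echo "colo 3y:  \$${colo_total}"
echo "cloud 3y: \$${cloud_total}"
echo "ratio:    $(awk -v a=$cloud_total -v b=$colo_total 'BEGIN{printf "%.1fx", a/b}')"
```

With these placeholder inputs the ratio lands around 3x, squarely inside the 2–4x range; the point of the exercise is to force real numbers into the comparison, not to trust these ones.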

Data Center Tier Classification: I, II, III, IV

A data center's reliability level is measured by the Uptime Institute Tier certification. As the Tier number rises, redundancy — and therefore expected uptime — increases. In Turkey, the vast majority of large operators and independent providers are certified at Tier III; facilities such as Türk Telekom Esenyurt and DorukNet Çorlu also have Tier IV zones.

  • Tier I: single power and cooling path, no redundancy, expected annual downtime 28.8 hours (uptime 99.671%).
  • Tier II: redundant power/cooling components but a single distribution path, annual downtime 22 hours (99.741%).
  • Tier III: multiple distribution paths, no downtime during planned maintenance, 1.6 hours/year (99.982%). The most common enterprise standard.
  • Tier IV: 2N or 2N+1 redundancy, concurrently maintainable and fault tolerant, 26 minutes/year (99.995%). Banking, defense, hospitals.
  • LEED Gold/Silver: green-building certification — not an alternative to Tier, an additional line item.

The Tier level binds directly to the SLA percentage in your contract. A Tier III facility commits to 99.98% uptime, which corresponds to roughly 1 hour 45 minutes of downtime per year. At Tier IV the limit is 26 minutes/year. In the contract text, always read whether the definition of "availability" covers both power and network or only power — the difference is significant.
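
A tiny helper for reading contracts: convert any SLA percentage into allowed downtime. This is plain arithmetic, nothing provider-specific.

```bash
# Usage: sla 99.982
sla() {
  awk -v p="$1" 'BEGIN {
    y = (100 - p) / 100 * 365.25 * 24 * 60   # minutes per year
    printf "%.3f%% -> %.1f min/year (%.2f h/year), %.1f min/month\n", p, y, y/60, y/12
  }'
}
sla 99.982   # Tier III: ~94.7 min/year
sla 99.995   # Tier IV: ~26.3 min/year
```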

The Data Center Market in Turkey: Locations

Large-scale data centers in Turkey are mostly located in Istanbul, Ankara, Çorlu and Antalya. Türk Telekom operates Esenyurt (Tier III), Gayrettepe and Ankara Ümitköy halls. Turkcell runs Tier III sites in Gebze and Ankara. KoçSistem has more than 20 years of operational experience across three facilities in Istanbul and Ankara. DorukNet serves Istanbul, Çorlu, Ankara and Antalya; the Çorlu site is LEED Gold certified.

On the independent provider side, Netinternet, Vargonen, Radore, Veridyen, Bulutistan, Atlantis Telekom, İşNet, Asır DX, DGN Teknoloji, Netin A.Ş. and Equinix IS-1/IS-2 come up frequently. Equinix Istanbul (IS-1, Şişli; IS-2, Ataşehir) stands out especially for international peering needs. When choosing a location, weigh five criteria: network latency (RTT) to your target audience, the existence of two independent fiber paths, whether the facility is on the Anatolian or European side of Istanbul (earthquake fault line and redundancy), access to the TNAP peering ecosystem, and the hourly tariff for hands-and-eyes service.

Cabinet, U Count and Power Budget

A standard data center cabinet is 42U tall; some facilities offer 47U or 48U. 1U = 44.45 mm (1.75 inches). Cabinet depth is typically between 1000–1200 mm and width is 600 or 800 mm. Nominal power per cabinet ranges from 4–8 kW; in high-density (HPC, GPU) facilities it can climb to 15–30 kW per cabinet.

  • Half cabinet (20U): ideal for 4–8 servers, typical $130-230 USD/month + electricity.
  • Full cabinet (42U): 12–25 servers, $230-600 USD/month + kWh.
  • Cage: physically separated area of 3–10 cabinets, for banking and regulated sectors.
  • Suite / dedicated room: 50+ cabinets, dedicated entrance and HVAC zone, for financial and telecom players.

Power calculations are usually based on 200–600 W per server. A 1U HP DL360 Gen10 averages 250 W, a 2U Dell R740 350 W, and a 2U GPU server (4×A100) can draw 1.6–2.5 kW. When filling a cabinet, mind the distinction between design power and committed power: the design might allow 6 kW, but if your contract commits 3 kW you cannot exceed that. Draw the cabinet's power through two independent PDUs (Power Distribution Units), labelled A and B feeds. Your servers must have dual (redundant) power supplies with each PSU plugged into a different feed; then if a single PDU, UPS line or generator drops, your service stays up. A quick budget check is sketched below.
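
Before signing a commit, sum your expected draws. Hostnames and wattages below are illustrative; also remember that with A/B feeds each side should stay under roughly half capacity so failover actually works.

```bash
#!/usr/bin/env bash
# Cabinet power-budget check against the committed kW (placeholder values).
commit_w=3000
declare -A draw=( [web-01]=250 [web-02]=250 [db-01]=350 [db-02]=350 [gpu-01]=2000 )
total=0
for host in "${!draw[@]}"; do
  total=$(( total + draw[$host] ))
done
echo "total draw: ${total} W / commit: ${commit_w} W"
(( total > commit_w * 80 / 100 )) && \
  echo "WARNING: above 80% of commit -- leave headroom for PSU failover"
```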

Power, UPS and Generator Architecture

In a typical Tier III facility the power architecture looks like: utility grid → main transformer → ATS (Automatic Transfer Switch) → UPS → PDU → cabinet. UPS systems are usually deployed in N+1 redundancy; even if a module drops, the load is carried. Generators kick in for outages longer than the UPS battery's runtime (15–30 minutes).

  • Grid transfer: ATS hands off to the generator within 8–15 seconds; the UPS bridges that interval.
  • UPS topology: prefer double-conversion (online); line-interactive is cheap but inadequate for enterprise.
  • Generator fuel: a 48–72 hour tank is standard; ask about a fuel contract for 24/7 refueling.
  • Annual testing: the facility should document transfer tests in its site plan; demand at least 2 tests per year in the contract.
  • PUE (Power Usage Effectiveness): 1.4–1.8 is reasonable, 1.2 and below is excellent.

Electricity bills come in one of two models: flat / commit (a fixed monthly fee per kW, e.g., around $40-60 USD per kW/month) or kWh-based consumption (real consumption multiplied by EPİAŞ PTF + distribution + commission). With a high, steady load close to your commit, the flat model usually wins; with low occupancy or unpredictable load, the kWh model avoids paying for capacity you never draw. The sketch below shows the crossover.
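
The crossover is easy to model. The tariffs below are assumptions, not quotes; PTF is the EPİAŞ spot price, here already converted to USD.

```bash
#!/usr/bin/env bash
# Flat-commit vs metered kWh billing at different average loads (assumed tariffs).
commit_kw=5; per_kw=50    # flat model: $/kW/month
ptf=0.11; uplift=1.4      # kWh model: $/kWh spot x distribution/commission
for load in 1 2 3 4 5; do
  awk -v l=$load -v c=$commit_kw -v r=$per_kw -v p=$ptf -v u=$uplift 'BEGIN{
    printf "%d kW avg load: commit $%.0f vs kWh $%.0f\n", l, c*r, l*730*p*u
  }'
done
```

With these inputs the kWh model is cheaper up to roughly 2 kW of average load, and the flat commit wins above it; rerun with your own quotes.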

Cooling: CRAH, In-Row, Hot/Cold Aisle

In modern data centers the standard cooling topology is hot aisle / cold aisle containment: cabinet fronts intake cold air on one side and exhaust hot air on the other; the two aisles are separated by physical panels or doors. CRAH (Computer Room Air Handler) units push cold air under the floor, and perforated tiles direct that air to the front of the cabinet. In high-density (15+ kW) cabinets, in-row cooling is used: AC units placed between cabinets capture hot air directly at the source. GPU/HPC cabinets prefer rear door heat exchangers (RDHx) or direct liquid cooling. Ask about kBTU/h or kW thermal capacity per cabinet in the contract.

  • Cold aisle temperature: ASHRAE A1 class 18–27°C; most facilities hold 20–24°C.
  • Relative humidity: 40–60% target; very low humidity creates static-electricity risk.
  • Delta-T: the temperature difference between hot and cold aisles; 10–15°C is ideal.
  • Free cooling: directly filter and use outside air when below 18°C; lowers PUE; in Turkey, Çorlu/Ankara have an advantage.
  • FM-200 / NOVEC 1230: gaseous fire suppression — does not damage hardware; water sprinklers are not preferred.

Network Architecture: BGP, Multihoming and Cross Connect

The most critical technical line item in your colocation contract is the network. Buying a 1 Gbps port from a single provider looks cheap, but it means a single link, a single upstream provider and a single transit point — when something breaks in the provider's core, your service drops. The right answer is BGP multihoming. For BGP multihoming you need: an ASN (Autonomous System Number — obtained from RIPE NCC, around €50/year), your own provider-independent IPv4 prefix (smallest is /24, $35–50 USD/IP on the secondary market), an IPv6 prefix (RIPE allocates /48 for free), and at least two transit/IX uplinks.

Verify that your two upstreams come from different PoPs in the same data center and that their fiber paths (the conduit they enter and exit through) are separate. Otherwise a single backhoe incident takes both down at once. RFC 4271 is the official spec for BGP-4, and RFC 5798 for VRRP v3.
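To make the BGP requirements concrete, here is a minimal FRRouting sketch for two upstreams. The ASNs and prefixes are documentation values (RFC 5398 / RFC 5737), the neighbor IPs are placeholders, and prefix lists / route maps are deliberately omitted; treat it as a starting skeleton, not a production config.

```bash
# Assumes FRR is installed and bgpd is enabled in /etc/frr/daemons.
cat > /etc/frr/frr.conf <<'EOF'
router bgp 64500
 bgp router-id 192.0.2.1
 ! two upstreams -- ideally different PoPs and separate fiber entries
 neighbor 198.51.100.1 remote-as 64496
 neighbor 203.0.113.1 remote-as 64497
 address-family ipv4 unicast
  ! your PI /24; the prefix must already exist in the kernel routing table
  network 192.0.2.0/24
  neighbor 198.51.100.1 activate
  neighbor 203.0.113.1 activate
 exit-address-family
EOF
systemctl restart frr
vtysh -c 'show bgp summary'
```
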

What Is a Cross Connect, and Why Does It Matter?

A cross connect is a physical cable (typically single-mode fiber LC/LC) pulled between two customers/cabinets in the same data center. Traffic does not traverse the public internet; it travels through the facility's optical distribution, with sub-100 µs latency. In large facilities like Equinix, cross connects to Microsoft Azure ExpressRoute, AWS Direct Connect and Google Cloud Interconnect are sold here. Monthly cross connect prices range from $200 to $1,500 USD depending on the facility. If you're building a hybrid cloud architecture (e.g., database in colocation, application on Azure), cross connect can bring latency down to 1–3 ms; otherwise you'd see 25–60 ms over the public internet. For Core Web Vitals targets, this difference is decisive.

Bandwidth Models

Bandwidth in colocation contracts usually comes in one of a handful of models; which one to pick depends on your traffic profile:

  • Flat-rate / unlimited: 1 Gbps or 10 Gbps port, unlimited traffic. Economical for high, sustained traffic. 1 Gbps unmetered in Istanbul falls in the $400–800 USD/month range.
  • Burstable / 95th percentile: 5-minute samples are taken, the top 5% of the month's samples are discarded, and you are billed on the highest remaining value, i.e., the 95th percentile. Tolerant to spikes. Around $3–5 USD/Mbps/month.
  • Volume-based / GB: X TB included per month, then a per-TB fee for overage. Suited to file servers with irregular traffic.
  • Port speed + commit: 10 Gbps port + 1 Gbps commit; overage at burst price. Hybrid model.
  • Free outbound to peers: some facilities count IXP-internal peering traffic as free; attractive for CDN and OTT.

Without measuring your own traffic over time with vnstat or iftop, you can't say which model fits. If you sit behind a CDN (Cloudflare, Bunny), 50–200 Mbps to your origin is usually enough; if you're doing live streaming, 5–10 Gbps may be required.
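
Once you have a month of 5-minute samples (vnStat 2's --fiveminutes output, or your own SNMP poller), estimating the billable figure is a one-liner. Here samples.txt is assumed to hold one Mbps value per line.

```bash
# Rough 95th-percentile estimator over one month of 5-minute Mbps samples.
sort -n samples.txt | awk '
  { v[NR] = $1 }
  END {
    idx = int(NR * 0.95); if (idx < 1) idx = 1
    printf "samples=%d  95th percentile ~ %.1f Mbps\n", NR, v[idx]
  }'
```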

IP Addresses, ASN and PI Prefix

IPv4 addresses have been exhausted at RIPE since 2019; RIPE NCC only allocates a single /24 (256 IPs) to each member, and you wait in the queue. Buying a /24 on the secondary market without membership averages around $9,000–13,000 USD. Renting a PA (provider-aggregatable) block from the provider runs roughly $0.05–0.20 USD per IP per month.

  • PA (Provider Aggregatable): allocation from the provider's block; not portable to another facility. Reasonable for small deployments.
  • PI (Provider Independent): directly from RIPE, the block you announce with your ASN. Mandatory for BGP multihoming.
  • IPv6: /48 free for RIPE members, unlimited address space; in 2026 it is no longer optional but mandatory.
  • RPKI: ROA-signed route announcements, the basic defense against BGP hijacks; create ROAs for your prefixes in the RIPE portal and confirm your upstreams drop RPKI-invalid routes (a quick check, together with PTR, is sketched after this list).
  • rDNS / PTR: if you intend to relay SMTP, the PTR record is critical; if the provider does not delegate PTR, do not send mail from that facility.
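
Two quick pre-flight checks, using documentation IPs as placeholders. The RIPEstat call follows its public data API; verify the endpoint and response fields against the current docs before scripting around them.

```bash
# PTR must resolve to your mail host, and forward DNS must match it back.
dig +short -x 192.0.2.10
dig +short mail.example.com
# RPKI validation state of your announcement (ASN/prefix are placeholders):
curl -s "https://stat.ripe.net/data/rpki-validation/data.json?resource=AS64500&prefix=192.0.2.0/24" \
  | jq -r '.data.status'
```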

Physical Security and Access Control

Entering a good Tier III facility passes through at least these layers: vehicle barrier → reception ID check → mantrap (single-person passage) → biometric (fingerprint/iris) + magnetic card → additional lock at the cabinet. CCTV recordings are retained for at least 90 days. ISO 27001, ISO 22301 (business continuity), ISO 9001, PCI-DSS and ISAE 3402 certifications have become standard at large facilities. If you operate in financial services or process special-category personal data (health, biometrics) under data-protection law, ask about ISO 27701 (privacy extension), SOC 2 Type II and the relevant compliance documents (BDDK in Turkey, KVKK). If you're choosing an overseas facility, data-controller responsibility under GDPR must be documented and Standard Contractual Clauses signed.

  • Access list: only individuals named in the contract with photo ID on file.
  • Visitor escort: visitors are always accompanied by facility personnel; never left alone in the hall.
  • Cabinet lock: combination lock + key; some facilities offer electronic locks + audit logs.
  • Tamper-evident seal: a sticker on a PSU or disk caddy that breaks when opened.
  • Equipment removal: every device entry/exit is documented with a GRF (Gear Receipt Form); no form = no hardware out.

Out-of-Band Management: IPMI, iDRAC, iLO, KVM-IP

Once your server is in colocation you cannot go there physically — remote management is vital. All enterprise servers ship with a BMC (Baseboard Management Controller): iLO on HPE, iDRAC on Dell, IPMI on Supermicro, XCC on Lenovo. This chip runs independently of the main CPU/OS; it can power a server on, enter the BIOS, capture the screen, mount a virtual CD and reinstall the OS. Never expose BMC interfaces to the public internet. Countless servers have been compromised through iLO 4 (CVE-2017-12542) and Supermicro IPMI auth-bypass vulnerabilities. Keep out-of-band management on a separate VLAN, behind a VPN, with BMC firmware up to date and default passwords changed on day one.
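
Day-one BMC hygiene can be done from the host OS over the local interface with ipmitool. User ID 2 and LAN channel 1 are typical but vendor-dependent, so list first and adjust; all addresses below are placeholders for a management VLAN.

```bash
ipmitool user list 1                                   # find the default admin slot
ipmitool user set password 2 'S0me-L0ng-Random-Pass'   # rotate default credentials
ipmitool lan set 1 vlan id 42                          # move the BMC onto the mgmt VLAN
ipmitool lan set 1 ipsrc static
ipmitool lan set 1 ipaddr 10.10.42.21
ipmitool lan set 1 netmask 255.255.255.0
ipmitool lan set 1 defgw ipaddr 10.10.42.1
ipmitool lan print 1                                   # verify the result
```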

Hands and Eyes (Remote Hands) Service

Your server is in Istanbul; you're in Izmir — a disk failed and a cable needs to be reseated. Hands and eyes service steps in here: a technician at the facility performs the physical intervention under your direction. Typical pricing is $25–80 USD per hour, billed in 30-minute minimums. Hands fees add up quickly; if you'll need 24/7 coverage, sign a monthly retainer contract (e.g., 4 hours/month included). Run your disks in RAID 6 or RAID 10 so a single disk failure doesn't trigger a midnight scramble — for the RAID layer, see our backup strategies guide.

  • Smart hands: disk swap, cable recheck, button reset — simple, 80% of scenarios.
  • Skilled hands: switch configuration, OS reload, diagnostics — more expensive, more skilled.
  • Spare parts on-site: stocking and labeling spare disks/PSUs/RAM at the facility — essential for critical workloads.
  • SLA-bound hands: response within 4 hours, 8 hours, or NBD (Next Business Day) options.
  • Photo/video request: most facilities email a cabinet photo within 30 minutes; free.

Hardware Selection: 1U, 2U or GPU?

There are three dimensions when choosing the hardware you'll move into the data center: density, cooling, supply. A 1U server (Dell R650, HPE DL360 Gen11) gives maximum density — typically 32 cores, 1 TB RAM, 8 NVMe slots. A 2U server (R750, DL380) brings more disks (24×2.5" or 12×3.5"), 4-GPU support and better cooling. A 4U / GPU server (Supermicro 4124GS-TNR, Dell XE9680) is for HPC and AI workloads. For GPU-heavy workloads, watch the thermal envelope: 4×NVIDIA H100 SXM5 + 2×Xeon will pull 6.5 kW; if your standard cabinet supports 6 kW, that single server fills the cabinet.

Bare-Metal Operating System Installation

Once the server sits in the rack, the first job is OS installation. You can mount an ISO via iDRAC/iLO virtual media, but at scale PXE boot + kickstart/preseed is preferred. Mounting a separate ISO on each of 50 servers in a batch is a time killer. Modern alternatives: Tinkerbell, MAAS (Canonical Metal-as-a-Service), FAI (Fully Automatic Installation). With mature automation, the moment a new server enters the rack and is plugged in, it gets an IP, the OS installs itself, Ansible configures it, and it registers with Prometheus. Our Ansible server automation guide covers this flow from scratch.
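
To show the shape of PXE provisioning, a minimal dnsmasq sketch for a dedicated provisioning VLAN. The interface name, address range and paths are assumptions for a Debian-style lab; the kickstart/preseed content itself is out of scope here.

```bash
apt-get install -y dnsmasq pxelinux
mkdir -p /srv/tftp
cp /usr/lib/PXELINUX/pxelinux.0 /srv/tftp/
cat > /etc/dnsmasq.d/pxe.conf <<'EOF'
interface=eth1
dhcp-range=10.10.50.100,10.10.50.200,12h
dhcp-boot=pxelinux.0
enable-tftp
tftp-root=/srv/tftp
EOF
systemctl restart dnsmasq
```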

Network Hardening: Firewall, VLAN, Jump Host

Opening SSH directly to your colocation server's public IP is the most common mistake. The right topology: all servers on a private VLAN, access through a single internet-facing jump host / bastion, the jump host accepting only key-based SSH + 2FA and restricted by IP allow-list. Fail2ban is the basic defense against SSH brute force; better still is key-based authentication + threat-intelligence sharing via CrowdSec. Every step in our VPS security hardening guide applies 100% on bare-metal too.
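
A hedged sketch of the bastion pattern, assuming a modern OpenSSH with an sshd_config.d drop-in directory; usernames, hostnames and the private subnet are placeholders.

```bash
# On the bastion: keys only, no root, named users only.
cat > /etc/ssh/sshd_config.d/50-bastion.conf <<'EOF'
PasswordAuthentication no
PermitRootLogin no
AllowUsers ops deploy
EOF
systemctl reload sshd
# On your workstation: reach private-VLAN servers via ProxyJump.
cat >> ~/.ssh/config <<'EOF'
Host colo-db-01
  HostName 10.10.10.21
  User ops
  ProxyJump ops@bastion.example.net
EOF
ssh colo-db-01
```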

DDoS Protection: Provider Capabilities and Limits

DDoS volumes have exploded over the years: in 2024 Cloudflare reported a 5.6 Tbps attack. It's not enough for your provider to say "DDoS protected"; ask about volumetric capacity (how many Gbps it absorbs), whether L7 (HTTP flood) protection exists, and the location of the scrubbing center.

  • Volumetric (L3/L4): SYN flood, UDP amp, NTP/DNS reflection. Capacity is measured in Gbps/Mpps.
  • Protocol (L4): connection-state exhaustion, SSL/TLS resource exhaustion. Sits behind a stateful firewall.
  • Application (L7): HTTP flood, slowloris, RUDY. Requires a WAF and rate limiting.
  • Always-on vs on-demand: always-on routes all traffic through scrubbing and adds latency; on-demand activates when triggered.
  • BGP redirect: when an attack is detected, the /24 announcement is shifted to a scrubbing facility, and clean traffic returns to origin via a GRE tunnel.

Recommended architecture: facility provider's baseline DDoS protection (5–20 Gbps) + an anycast scrubber like Cloudflare/Imperva in front + Nginx rate limiting at the origin — layered defense. For details, see our multi-layer DDoS protection guide.
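
At the origin layer, a minimal Nginx rate-limiting sketch. The zone size, rate and upstream port are starting points to tune against real traffic, not recommended values.

```bash
cat > /etc/nginx/conf.d/ratelimit.conf <<'EOF'
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;
server {
    listen 80;
    location / {
        limit_req zone=perip burst=20 nodelay;
        proxy_pass http://127.0.0.1:8080;
    }
}
EOF
nginx -t && systemctl reload nginx
```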

Monitoring: Prometheus, Grafana, Loki, Alertmanager

If you're not monitoring your servers, you're not technically running them. The modern stack settles on Prometheus + Grafana + Alertmanager + Loki. Install node_exporter on each server, ipmi_exporter for the BMC, and snmp_exporter for the network. Our Prometheus and Grafana for server monitoring guide covers alert rules, dashboard JSON examples, and Loki log aggregation in detail; for SLA accounting, track the availability of the up{job="node"} series.
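
A minimal Prometheus sketch tying this together: scrape two node_exporters and page when one disappears. Target IPs and file paths are illustrative.

```bash
cat > /etc/prometheus/prometheus.yml <<'EOF'
global:
  scrape_interval: 15s
rule_files:
  - /etc/prometheus/alerts.yml
scrape_configs:
  - job_name: node
    static_configs:
      - targets: ['10.10.10.21:9100', '10.10.10.22:9100']
EOF
cat > /etc/prometheus/alerts.yml <<'EOF'
groups:
  - name: availability
    rules:
      - alert: NodeDown
        expr: up{job="node"} == 0
        for: 5m
        labels:
          severity: P1
EOF
systemctl restart prometheus
# 30-day availability for SLA accounting (run in the Prometheus UI):
#   avg_over_time(up{job="node"}[30d])
```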

SLA: What to Look for in the Contract

The SLA (Service Level Agreement) is the most heavily marketed and least carefully read document in colocation. A line item of 99.9% means tolerating 43.8 minutes of downtime per month; 99.99% allows only 4.4 minutes. The fine print determines a lot. Watch the service credit cap: if you pay $400 USD/month, a 6-hour outage might trigger a 10% credit — i.e., $40 USD. Yet those 6 hours could have cost your e-commerce site $4,000 USD in lost sales. The real takeaway: never depend on a single facility; build active/passive or active/active multi-DC architecture.

  • Uptime definition: power only, network + power, customer-side switch included?
  • Excluded events: planned maintenance, force majeure and customer hardware failure are typically excluded.
  • Service credit: when the SLA is breached, what percentage of the monthly fee is credited (usually in the 5–50% range, capped at the monthly fee).
  • Response vs resolution: responding to a network incident in 15 minutes is different from resolving it.
  • P1/P2/P3 classification: for critical outages, an hourly intervention commitment must exist.

Pricing: Approximate Ranges for 2026

All numbers are approximate averages for early 2026; they can swing 30% based on provider, location, contract length (1/3/5 years) and exchange rate. Use them as a reference when negotiating.

  • 1U server colocation: $65-150 USD/month + electricity (~1A/100W included)
  • Half cabinet (20U): $165-300 USD/month + 3–5 kW commit (~$115-200 USD extra)
  • Full cabinet (42U): $280-600 USD/month + 5–10 kW commit
  • Additional 1 kVA power: $40-60 USD/month (commit), or kWh model at EPİAŞ × ~1.4
  • 1 Gbps unmetered port: $370-800 USD/month
  • 10 Gbps port + 1 Gbps commit: $600-1,170 USD/month
  • 1 IPv4 (rented from provider): $3-6 USD/month
  • Cross connect: $130-600 USD/month
  • Hands and eyes: $25-80 USD/hour (30 min minimum)
  • Spare disk storage: $5-13 USD/disk/month

Annual prepayment commonly attracts a 10–15% discount, and a three-year commitment 20–25%. New data center openings sometimes offer an anchor tenant discount (50% off for the first 6 months). When negotiating, look beyond list price at setup fees, per-IP fees and cross-connect line items — that's where the hidden costs hide.

Build vs Buy: Cloud or Colocation?

Hybrid is the most accurate answer, but the principle is this: if your traffic is variable, you're in R&D, or you serve a global geography, cloud is more reasonable. If traffic is steady, the lifecycle is longer than three years, and you need GPU/high I/O, colocation is cheaper. Backblaze's 2024 "Drive Cost Comparison" article showed colocation could save up to 80% versus AWS over 7 years.

  • Cloud upside: minute-level provisioning, region variety, managed services (RDS, EKS), pay-as-you-go.
  • Cloud downside: egress traffic bills, expensive high-IOPS, GPU instance waiting lists, 3-5x cost on predictable load.
  • Colocation upside: fixed monthly cost, your own hardware, NVMe/GPU freedom, cheap long-term.
  • Colocation downside: capex, capacity planning, hands fees, weak answer to distributed geography.
  • Hybrid: critical database in colocation, edge/CDN on cloud; bridged via cross connect.

Geographic Distribution and Disaster Recovery

Locking yourself into a single facility is the biggest risk. To borrow the wording of RFC 8174: not SHOULD but MUST — you must have a backup at a second location. The ideal setup is NOT Istanbul European side + Istanbul Anatolian side, but asynchronous replication between different cities (e.g., Istanbul ↔ Ankara, or Istanbul ↔ Frankfurt). On the database side, use PostgreSQL streaming replication, MySQL group replication, Redis Sentinel or MongoDB replica sets. Asynchronous replication is cheap but carries data-loss risk; synchronous gives zero loss but adds 5+ ms latency. Our database backup strategies and PostgreSQL performance articles go deeper into this, and a minimal replication sketch follows the list below.

  • Active/Passive: primary site live, secondary in hot/warm standby; failover within minutes.
  • Active/Active: both sites answer simultaneously over an anycast IP; double cost, zero RTO.
  • RPO (Recovery Point Objective): acceptable data loss (e.g., 5 min).
  • RTO (Recovery Time Objective): acceptable downtime (e.g., 30 min).
  • Drills: at least 2 DR-plan drills per year; a plan that exists only on paper is worthless.
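
A minimal asynchronous streaming-replication sketch for PostgreSQL 16 on Debian-style paths. The hostnames, subnet and password are placeholders, and the standby steps wipe its data directory, so treat this as lab material.

```bash
# On the primary:
sudo -u postgres psql -c "CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'change-me';"
echo "host replication replicator 10.20.0.0/24 scram-sha-256" >> /etc/postgresql/16/main/pg_hba.conf
systemctl reload postgresql
# On the standby (destroys its current data directory):
systemctl stop postgresql
rm -rf /var/lib/postgresql/16/main
sudo -u postgres pg_basebackup -h primary.ist.example.net -U replicator \
  -D /var/lib/postgresql/16/main -R --wal-method=stream -P
systemctl start postgresql
```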

Redundancy and the 3-2-1-1-0 Rule

The new standard in the backup world is 3-2-1-1-0: 3 copies, 2 different media, 1 off-site, 1 offline/immutable, 0 errors on a restore test. To apply this rule in colocation, one copy of your backups must live outside the facility (e.g., Backblaze B2, Wasabi, AWS S3 Glacier, or a MinIO cluster in another DC).
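
One hedged way to implement the off-site and verification legs is restic against an S3-compatible bucket; the endpoint, bucket and paths below are placeholders. Pairing it with a bucket that has object lock enabled also covers the "1 immutable" leg.

```bash
export RESTIC_REPOSITORY="s3:https://s3.eu-central-003.backblazeb2.com/acme-offsite"
export RESTIC_PASSWORD_FILE=/root/.restic-pass
restic init                           # one-time repository setup
restic backup /var/backups/db         # the off-site copy
restic check --read-data-subset=5%    # the "0 errors" leg of 3-2-1-1-0
```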

Compliance: KVKK, GDPR, PCI-DSS, ISO 27001

If you process personal data, you fall under KVKK (Turkey) and, if your customers are in the EU, under GDPR. KVKK Article 9 restricts overseas data transfers; if data leaves the country, explicit consent, BCRs or an adequacy decision is required. That makes Turkish locality regulatorily critical. If you have banking/finance customers, BDDK adds further requirements (mandatory local data center, external audit). Recommended further reading: OWASP Top 10 2026 for the application layer, OAuth 2.0 / OIDC for identity, JWT security pitfalls for the token side.

  • ISO 27001: information security management — the minimum bar for the facility.
  • ISO 22301: business continuity — for DR maturity.
  • ISO 27017/27018: cloud security and PII protection.
  • PCI-DSS: if you accept card payments, audits are mandatory at L1-L4 levels; the facility must be PCI-DSS certified.
  • SOC 2 Type II: a compliance report your SaaS customers will frequently ask for.
  • HIPAA: if you have US healthcare customers, a BAA must be signed.
  • TS 13298: Turkish e-correspondence standard — for those serving the public sector.

Monitoring SLA, Alert Flow, On-Call

If you serve 24/7 you must run an on-call rotation. PagerDuty, Opsgenie, Better Stack, or open-source Grafana OnCall + Karma are all viable. Keep the alert taxonomy simple: P1 (live outage), P2 (degraded), P3 (capacity warning), P4 (cosmetic). A midnight page is expensive: unnecessary alerts breed fatigue and teach people to ignore the real ones.

Migration: Moving From Cloud to Colocation

On the supply side, the time between ordering a server and racking it is longer than most people guess. Supply chains normalized through 2024–2025, but high-end CPUs (e.g., Xeon Gold 6526Y) and especially GPUs (H100/H200) are still backlogged; standard 1U/2U servers carry a 2–4 week lead time in Turkey, while GPU servers run 8–24 weeks. If your annual AWS spend is roughly $60-100K USD, moving to colocation shortens your ROI cycle. The order: (1) detailed cost analysis (cloud bill side-by-side with colo TCO), (2) hardware sizing (using real CPU/RAM/IOPS p95 values), (3) network architecture (cross connect bridging back to cloud), (4) staged cutover (dev first, then batch jobs, then read-only, then critical last). On the application side, if your IaC is already managed with Terraform or Ansible, migration moves much faster; container workloads built on Kubernetes run on the same manifests in a colocation cluster.

Three Real-World Scenarios

1) Mid-sized e-commerce

15 million monthly page views, 200K active users, 25 GB product database. Setup: 2× 1U web servers (Nginx + PHP-FPM), 1× 2U primary DB (PostgreSQL), 1× 2U replica DB, 1× 1U Redis + queue. Total 8U, 1.8 kW, 1 Gbps port. Cloudflare in front, origin only sees scrubbed traffic. Monthly cost in Istanbul falls in roughly the $600-830 USD range (server amortization included).

2) B2B SaaS platform

500 enterprise customers, multi-tenant, 50 TB of data flow per month, 99.95% SLA commitment. Setup: 6× 2U Kubernetes workers, 3× 1U etcd/control-plane, 4× 2U storage (Ceph cluster), 2× 1U load balancers (HAProxy + keepalived). Two facilities (Istanbul + Ankara), active/passive PostgreSQL streaming replication. Cross connect integration to Azure Cognitive Services. Around $2,500-3,700 USD/month.

3) Gaming company (FPS server)

Low latency is critical (<30 ms across Turkey). Setup: 4× 1U high-frequency CPU (Intel Xeon Gold 6444Y, 4.0 GHz turbo), DDR5-5600, NVMe storage. A single facility in Istanbul plus a direct route to Türk Telekom/Turkcell/Vodafone customers via free TNAP peering. Anti-DDoS scrubbing always-on. Around $730-1,170 USD/month + DDoS protection. In these examples, you can scale server counts, cabinet density and budgets by 1.5–2x to fit your own load.

Common Mistakes and Pitfalls

  • Single PSU / single feed: pulling one cable to save money = single point of failure.
  • Default IPMI password: a BMC left exposed to the internet with "ADMIN/ADMIN" = compromised server.
  • Cable management neglect: spaghetti behind the cabinet = blocked hot-air circulation + nightmare for hands work.
  • No labeling: which of those 24 servers is prod-db-01? Every device, cable and port must be labeled so the hands tech doesn't pull the wrong disk.
  • No capacity planning: 6 months in the cabinet is full, and a physical move is both expensive and risky.
  • Keeping backups in the same facility: a fire/flood takes both prod and backup — off-site is mandatory.
  • Single upstream ISP: an ISP transit problem flattens your entire service.
  • Not reading the SLA: it says "99.99% uptime" but the service credit cap is 5% → the loss stays with you.
  • Hands abuse: making midnight hands calls a habit for trivial things multiplies the bill.
  • Compliance forgetfulness: failing to produce facility certifications during a KVKK audit triggers serious fines.

Trends: Where the Data Center Market Is Heading

The data center market is shifting. AI/GPU-dense facilities (40+ kW per cabinet) are being built, and liquid cooling is becoming standard. ARM-based Ampere and Graviton-class servers have surpassed x86 in performance per watt. Edge data centers (micro facilities) are spreading from Istanbul into Bursa/Izmir/Antalya, pulling connection latency down.

  • Green energy: PUE under 1.2, hydro- or nuclear-sourced facilities — affecting carbon reporting.
  • Liquid immersion cooling: 60+ kW cabinet density becomes feasible.
  • Confidential computing: memory encryption with Intel TDX, AMD SEV-SNP — for finance/healthcare.
  • Disaggregated infrastructure: separating RAM/storage from servers and pooling them via CXL.
  • Sovereign cloud: local cloud investments in Turkey (Türk Telekom, Turkcell, Vodafone) are growing — colocation + sovereign cloud hybrids are spreading.

Decision Matrix: Which Choice Is Right for You?

  • Monthly traffic < 1 TB, < 100K users → VPS is enough.
  • 1–10 TB/month, 100K–1M users, dynamic content → VPS cluster or dedicated.
  • 10+ TB/month, sustained load, control required → colocation 1U-4U.
  • Special hardware (GPU, storage array, FPGA) → colocation cabinet.
  • Compliance (PCI-DSS, BDDK, heavy KVKK data) → colocation or local cloud.
  • Geographic distribution, minute-level provisioning → cloud primary, colocation core.
  • Lifecycle longer than three years, steady load → colocation TCO is most advantageous.

Professional consulting for your server hosting needs

For the right data center selection, cabinet/power sizing, BGP multihoming setup, IPMI and the monitoring stack, DDoS protection and DR planning, get in touch with our expert team.
