Technology evolves rapidly, and there’s always something new on the horizon—whether it’s the latest programming framework, cloud service, or infrastructure upgrade. While adopting modern solutions can be beneficial, the reality is that newer doesn’t always mean better for every situation.
Take Ferrari, for example. It’s one of the best cars in the world, but is it the right choice for a school run? Probably not. A more practical car would get the job done with less hassle, cost, and maintenance. The same logic applies to technology: just because something is new and advanced doesn’t mean you need it.
I have often seen scenarios where using the latest technology is not the best choice. Many companies subscribe to technology they don't need and waste valuable money.
Overengineering with the Latest Programming Techniques
Imagine you’re building a simple to-do list application. You could use the latest programming frameworks, microservices architecture, and AI-powered features to predict tasks. But is all that necessary? For a basic to-do app, a straightforward monolithic architecture with a simple front-end and back-end would suffice. Overengineering with advanced techniques not only increases development time but also adds unnecessary complexity and cost.
If you’re building a simple company website or blog, plain HTML, CSS, and JavaScript (or even a CMS like WordPress) can be more efficient. Using heavy frameworks adds complexity, increases load time, and requires ongoing maintenance for security updates.
I believe in giving a new technology time to mature before implementing it in projects. When KnockoutJS came to the market with a lot of hype, we used it in our projects. Within a few years its popularity declined, and I found it challenging to find experts to support those projects.
Cloud Services: When Simplicity Wins
Cloud computing has revolutionized how we build and deploy applications, but not every project requires the latest cloud services. For instance, if you’re running a small blog or a personal portfolio website, you don’t need a high-availability, multi-region cloud setup with auto-scaling and serverless functions. A basic shared hosting plan or a simple virtual private server (VPS) would work just fine.
Serverless computing (AWS Lambda, Azure Functions, and the like) is a great innovation, but is it necessary for every application? If you have a small internal tool that runs reliably on a virtual machine or traditional hosting, switching to serverless can introduce unnecessary cost and complexity. Serverless makes sense for highly variable workloads, but if your system has steady traffic, traditional hosting is often more cost-effective: a simple VM-based or containerized deployment works just as well, and you can host many small tools on one VM.
Kubernetes is the gold standard for container orchestration, but do you really need it for a small business application with just a few users? Running Kubernetes for small apps requires DevOps expertise and adds costs for managing nodes and networking: unnecessary complexity when a more straightforward solution would suffice. If your application runs on a single server or has only a few containers, a managed container service like AWS ECS, or even Docker Compose, could be simpler and more cost-effective.
Some businesses rush into a multi-cloud strategy, believing it provides better reliability and cost savings. However, managing multiple cloud providers can increase complexity and costs without providing real benefits. If your workload is stable and fits within one cloud provider, it’s often more straightforward and more efficient to optimize for that platform instead of spreading across multiple providers. Multi-cloud is useful when business continuity, compliance, or specific services require it, but not just because it’s trendy.
Databases: Not Every App Needs Big Data
When building an application, it’s tempting to use the latest database technologies like distributed NoSQL databases or real-time analytics engines. However, for many use cases, a traditional relational database like MySQL or PostgreSQL is more than adequate. These databases are reliable, well-documented, and easier to manage for smaller-scale applications. A local library management system doesn’t need a distributed database like Cassandra. A simple SQL database will handle the data efficiently and cost-effectively.
Mobile Apps: Native vs. Cross-Platform
While native mobile app development offers the best performance and access to device-specific features, it's not always necessary. For many applications, cross-platform frameworks like Flutter or React Native can deliver a great user experience without the need to maintain separate codebases for iOS and Android. A small business app for tracking inventory, for example, doesn't need the performance optimization of native development; a cross-platform solution can save time and resources while still meeting the business's needs.
AI and Machine Learning: Not Every Problem Requires a Smart Solution
Artificial intelligence and machine learning are powerful tools, but they’re not always the right solution. For example, if you’re building a basic chatbot for customer support, a rule-based system might be more than enough. Implementing a full-scale natural language processing (NLP) model would be overkill and could lead to unnecessary costs and complexity. A simple FAQ-based chatbot using predefined responses is often sufficient for small businesses, rather than deploying a complex AI-driven conversational agent.
I am not against new technology, but I am against the hype. The key to effective technology use is understanding the problem you're trying to solve and choosing the right tools for the job. Just as you wouldn't use a Ferrari for a school run, you don't always need the latest and greatest tech for every application. By focusing on simplicity, cost-effectiveness, and practicality, you can build solutions that are efficient, maintainable, and fit for purpose.
In my experience working with modern web applications, one of the biggest challenges is balancing speed, scalability, and efficiency. No matter how well a database is optimized, as traffic grows certain bottlenecks start appearing: slow login sessions, delayed dashboard reports, lagging product displays, and inefficient handling of dynamic content. Often we keep dynamic content in the database so it can be changed when required, but in practice that content rarely changes, and serving it from the database on every request significantly increases database load as traffic grows.
That's where Redis comes in. The concept is simple: keep frequently used data in memory so the application doesn't have to hit the database for every request. Over time, I've realized that introducing Redis into an application architecture can significantly improve performance and user experience. Whether it's caching frequently accessed data, improving authentication mechanisms, or optimizing background tasks, Redis has become an essential tool for building high-performance applications.
Let me share some real-world scenarios from my experience where Redis has made a significant impact.
Faster and More Reliable Login Sessions
One of the first places I found Redis helpful was in managing user sessions. With stateful (session-based) session management in a monolithic application, the server maintains the user's session data, typically in a database or in memory (the benefits of using JWT instead are a different topic). A unique session ID is exchanged between the browser and the server with each request, either via cookies or, less securely, embedded in the request URL. This works fine for small traffic but becomes a bottleneck as the user base grows: each login requires reading and writing session data in the database, adding unnecessary load, and scaling across multiple servers causes session inconsistencies because every server has to query the same database. This is where Redis comes in handy. It stores session data in memory, reducing lookup time significantly, and handles session expiration without any manual cleanup. With Redis as a centralized session store, all application instances can access the same session data without hitting the database.
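To make this concrete, here is a minimal sketch of a Redis-backed session store using Python and the redis-py client. The key prefix, TTL, and payload shape are illustrative assumptions, not a description of our production code.

```python
# A minimal sketch of a Redis-backed session store (Python + redis-py).
# Key names, TTL, and the JSON payload are illustrative assumptions.
import json
import secrets

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

SESSION_TTL_SECONDS = 30 * 60  # expire sessions after 30 minutes of inactivity

def create_session(user_id: int, profile: dict) -> str:
    """Store session data in Redis and return the session ID for the cookie."""
    session_id = secrets.token_urlsafe(32)
    payload = json.dumps({"user_id": user_id, **profile})
    r.setex(f"session:{session_id}", SESSION_TTL_SECONDS, payload)
    return session_id

def get_session(session_id: str) -> dict | None:
    """Fetch session data; refresh the TTL so active users stay logged in."""
    key = f"session:{session_id}"
    data = r.get(key)
    if data is None:
        return None  # expired or never existed -- Redis cleaned it up for us
    r.expire(key, SESSION_TTL_SECONDS)
    return json.loads(data)
```

Because every application instance talks to the same Redis store, any server in the farm can validate any session ID without touching the database.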
Powering Report Dashboards
Every large web application has some sort of dashboard and reporting. Dashboards that display real-time analytics or reports often require frequent database queries, which can strain the system. Reports involve aggregating millions of records, which slows down the database, and multiple users pulling dashboards or reports at the same time creates locking issues. Frequent querying can also lead to stale data being displayed before an update completes. Redis can cache this data, reducing the load on the database and ensuring that dashboards load quickly.
On the reporting dashboards of several platforms, we used Redis to cache frequently accessed data such as daily sales, user activity, and inventory levels. This reduced database queries by 80% and allowed the dashboards to update in real time, providing a seamless experience for business users.
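The pattern behind this is classic cache-aside: read from Redis first, fall back to the database on a miss, and cache the result with a short TTL. A minimal Python sketch, where the key name, TTL, and the load_daily_sales_from_db helper are all assumed for illustration:

```python
# A minimal cache-aside sketch for dashboard data (Python + redis-py).
# The key name, TTL, and load_daily_sales_from_db() are illustrative assumptions.
import json

import redis

r = redis.Redis(decode_responses=True)

CACHE_TTL_SECONDS = 300  # acceptable staleness window for dashboard figures

def get_daily_sales(day: str) -> dict:
    key = f"dashboard:daily_sales:{day}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)          # cache hit: no database round trip
    data = load_daily_sales_from_db(day)   # cache miss: run the expensive aggregation
    r.setex(key, CACHE_TTL_SECONDS, json.dumps(data))
    return data

def load_daily_sales_from_db(day: str) -> dict:
    # Placeholder for the expensive aggregation query.
    return {"day": day, "orders": 0, "revenue": 0.0}
```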
Optimizing Product Displays
E-commerce platforms often display product details, reviews, and recommendations. Fetching this data from a database for every request can be slow and resource-intensive. Redis can cache product data, ensuring that pages load instantly. For an online voucher shop, we implemented Redis to cache product details and recommendations. This reduced page load times from 3-4 seconds to under 500 milliseconds, significantly improving the user experience and boosting conversion rates.
Storing Dynamic Content Efficiently
Web applications often serve dynamic content like user-generated posts, comments, or notifications. Storing and retrieving this content from a database can be slow, especially as the application scales. Redis can act as a high-performance data store for such dynamic content. In our employee engagement platforms, we used Redis to store and retrieve user posts and comments. This allowed us to serve content to users in real time, even during peak traffic periods, without overloading the database.
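A simple way to do this is to keep a bounded list of the most recent posts in Redis while the database remains the system of record. The sketch below is illustrative only; the key name, list length, and save_post_to_db helper are assumptions:

```python
# A minimal sketch of serving recent user posts from Redis (Python + redis-py).
# Key name, list length, and post shape are illustrative assumptions; the
# database remains the system of record.
import json

import redis

r = redis.Redis(decode_responses=True)

RECENT_POSTS_KEY = "feed:recent_posts"
MAX_RECENT_POSTS = 200

def publish_post(post: dict) -> None:
    """Write the post to the database first, then push it onto the Redis feed."""
    save_post_to_db(post)                                  # system of record
    r.lpush(RECENT_POSTS_KEY, json.dumps(post))            # newest first
    r.ltrim(RECENT_POSTS_KEY, 0, MAX_RECENT_POSTS - 1)     # keep the list bounded

def get_recent_posts(limit: int = 50) -> list[dict]:
    """Serve the feed straight from memory, no database query needed."""
    return [json.loads(p) for p in r.lrange(RECENT_POSTS_KEY, 0, limit - 1)]

def save_post_to_db(post: dict) -> None:
    pass  # placeholder for the real persistence call
```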
Queueing Background Jobs
Web applications often need to handle background tasks like sending emails, processing payments, or generating reports. Redis can be used as a message broker to queue and manage these tasks efficiently. In some projects that required sending bulk emails to users, we used Redis to queue email jobs. This allowed us to process thousands of emails in the background without affecting the performance of the main application.
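Conceptually, the queue is just a Redis list: the web application pushes jobs, and a background worker blocks on the other end. The sketch below is a minimal illustration (queue name and job shape are assumptions); in practice a library such as RQ or Celery with Redis as the broker adds retries and monitoring on top of the same idea.

```python
# A minimal sketch of a Redis-backed job queue for bulk email (Python + redis-py).
# The queue name and job fields are illustrative assumptions.
import json

import redis

r = redis.Redis(decode_responses=True)

EMAIL_QUEUE = "jobs:email"

def enqueue_email(to: str, subject: str, body: str) -> None:
    """Called by the web application; returns immediately."""
    r.lpush(EMAIL_QUEUE, json.dumps({"to": to, "subject": subject, "body": body}))

def worker_loop() -> None:
    """Run in a background process; blocks until a job arrives."""
    while True:
        _, raw = r.brpop(EMAIL_QUEUE)   # blocking pop from the tail (FIFO with lpush)
        job = json.loads(raw)
        send_email(job["to"], job["subject"], job["body"])

def send_email(to: str, subject: str, body: str) -> None:
    pass  # placeholder for the real SMTP / email API call
```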
Caching Frequently Accessed Data
Redis is an excellent tool for caching frequently accessed data, such as configuration settings, user preferences, or static content. This reduces the need to query the database repeatedly, improving overall performance. On the content management side of some systems, we used Redis to cache website settings and user preferences. This reduced database load and ensured that the application remained fast and responsive, even as the number of users grew.
Redis is more than just a caching tool; it’s a versatile and powerful solution that can address a wide range of challenges in modern web applications. After integrating Redis into multiple areas of our system, it became one of the most essential tools in our tech stack. If you’re building a web application and haven’t yet explored Redis, I highly recommend giving it a try. Its simplicity, speed, and flexibility make it a must-have tool for any developer looking to deliver high-performance, scalable, and user-friendly applications. Redis isn’t just an option—it’s a game-changer.
Businesses are changing fast, and they must adapt quickly to meet customer demands while maintaining operational efficiency. As our company specializes in supplying digital gift cards, we are responsible for delivering thousands of gift cards every hour. Previously, we relied on physical gift cards, which were not only expensive to manage and deliver but also risky, as they could be lost in the post. To address these issues, I was in charge of digitalizing our gift card delivery process and building a robust Advance Voucher Management System (AVMS). Going digital brought three key benefits:
Cost Efficiency: Digital vouchers eliminate the need for manual purchase, management and shipping, significantly reducing costs.
Speed: Digital delivery ensures instant access to gift cards, enhancing customer satisfaction.
Risk Mitigation: Digital vouchers eliminate the risk of physical cards being lost or stolen during delivery.
Challenges in Digital Voucher Management
As we entered the digital gift card market, we discovered a diverse ecosystem of suppliers offering digital gift cards through APIs. Some retailers, like Amazon, had their own proprietary APIs, while others relied on third-party processors. These processors use different technologies, creating potential roadblocks in our voucher delivery system.
Additionally, digital vouchers came in various formats—some as URLs, others as codes, and some requiring a code and PIN combination. Given these complexities, a flexible and scalable architecture was needed, so a microservices architecture came to mind as the natural solution.
AVMS Architecture
To accommodate two distinct client types (B2C and B2B) requiring vouchers from our ecosystem, I designed a dual API gateway structure:
API Gateway for Internal Application (B2C): This gateway serves our internal applications, such as the Choice platform. For example, when a customer decides to redeem a Choice gift card for an Argos gift card, the Choice platform would connect to this gateway.
API Gateway for Third-Party Applications (B2B): This gateway is for external clients who require digital vouchers for their own platforms. It comes with enhanced security and additional microservices to cater to third-party integrations.
Key Microservices in the Voucher Management System
Each gateway is supported by four common core microservices, with an additional service for third-party API connections:
Order Service: This service handles all incoming orders, whether from internal or external platforms.
Catalogue Service: This service provides details about our gift card offering and allows internal teams to manage the retailer catalogue.
Stock Service: This service manages stock levels and works with the order process to ensure availability. Many retailers do not offer API-based gift cards, requiring our team to upload vouchers received via Excel files manually.
The stock service leverages two specialized microservices:
On-Demand Voucher Service: Handles vouchers supplied in real time through API integrations.
Distribution Service: This service manages voucher delivery via email, offering both standard and customizable templates based on client requirements.
Account Service: This service facilitates account management for external clients, allowing them to configure API connections, create additional accounts, and top up balances.
Given the diverse technologies used by different retailers and processors, isolating them into separate services ensured system resilience. If one retailer or processor faced issues, the rest of the system remained unaffected.
Designing and leading the development of this voucher management system has been an incredibly rewarding experience. The transition from physical to digital gift cards has streamlined operations, reduced costs, and enhanced reliability. By leveraging microservices architecture, we have built a scalable, flexible, and resilient system capable of adapting to the dynamic nature of digital gift card distribution.
As one of our group of companies supplies products to other businesses, mainly insurance companies, we manage a vast network of suppliers and clients. Each supplier provides us with product feeds containing stock and pricing information in different formats such as Excel, CSV, and text files. Similarly, our clients require customized product feeds with specific margins and delivery charges.
Previously, this entire process was manual or semi-automated, requiring significant effort to collect, process, and distribute feeds. Recognizing the inefficiency and scalability challenges, I designed and implemented an automated Azure-based solution to streamline the entire workflow.
Challenges of the Manual System
– Suppliers sent product feeds in different formats, requiring manual processing.
– Product data had to be categorized and updated manually in our database.
– Each client had unique pricing and delivery rules, making feed generation time-consuming.
– The entire process was prone to human errors and delays.
The Azure-Based Automated Solution
To eliminate inefficiencies, I built an end-to-end automated system using Azure services. The solution handles the entire workflow, from file ingestion to client-specific feed distribution, with minimal manual intervention.
Architecture Overview
Data Ingestion
Azure Blob Storage: Suppliers upload product feeds to a designated Blob Storage container. Blob Storage acts as the central repository for all incoming files.
Data Processing
Event Grid: Automatically triggers when a new file is uploaded to Blob Storage and initiates the processing workflow by invoking Azure Functions.
Azure Functions (Serverless): Processes uploaded files based on their format (Excel, CSV, Text). Validates and standardizes the data into a consistent format (e.g., JSON or CSV).
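As an illustration of this step, here is a minimal blob-triggered Azure Function in Python (v2 programming model) that standardizes an uploaded CSV feed into JSON. The container names, output path, and connection setting are assumptions, not the actual implementation:

```python
# function_app.py -- a minimal sketch of the blob-triggered standardization step
# (Azure Functions, Python v2 model; requires the azure-functions package).
# Container names, the connection setting, and the output location are
# illustrative assumptions.
import csv
import io
import json
import logging

import azure.functions as func

app = func.FunctionApp()

@app.blob_trigger(arg_name="feed", path="supplier-feeds/{name}",
                  connection="AzureWebJobsStorage")
@app.blob_output(arg_name="outfile", path="standardized-feeds/{name}.json",
                 connection="AzureWebJobsStorage")
def standardize_feed(feed: func.InputStream, outfile: func.Out[str]) -> None:
    """Convert an incoming CSV supplier feed into a standard JSON structure."""
    logging.info("Processing supplier feed: %s (%d bytes)", feed.name, feed.length)
    text = feed.read().decode("utf-8-sig")
    rows = list(csv.DictReader(io.StringIO(text)))
    # Validate / normalize field names here (SKU, price, stock, ...) before storing.
    outfile.set(json.dumps(rows))
```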
Data Storage
Azure SQL Database / Cosmos DB: Stores standardized product data in a structured format.
Data Enrichment
Azure Functions / App Service: Applies the business rules:
1. Categorizes products based on predefined rules.
2. Applies client-specific margins and delivery charges (stored in a configuration table or database).
3. Stores the enriched data back in the database.
Feed Generation and Distribution
Azure Functions / Logic Apps: Generate customized feeds for each client according to their requirements (e.g., CSV, JSON, or Excel) and store the generated feeds in a separate Blob Storage container.
Finally, it sends feeds to clients via email or SFTP based on their preferences.
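To illustrate the enrichment and feed-generation logic, here is a hypothetical Python sketch that applies a client-specific margin and delivery charge and renders the feed in the client's preferred format; the configuration shape and field names are assumptions:

```python
# A minimal sketch of the enrichment + feed-generation step. The client
# configuration shape (margin, delivery charge, format) is an illustrative
# assumption; in the real system these rules live in a configuration table.
import csv
import io
import json

def enrich_products(products: list[dict], client_cfg: dict) -> list[dict]:
    """Apply a client-specific margin and delivery charge to standardized products."""
    margin = 1 + client_cfg["margin_percent"] / 100
    delivery = client_cfg["delivery_charge"]
    return [
        {**p, "client_price": round(float(p["cost_price"]) * margin + delivery, 2)}
        for p in products
    ]

def generate_feed(products: list[dict], client_cfg: dict) -> str:
    """Render the enriched products in the client's preferred format."""
    enriched = enrich_products(products, client_cfg)
    if client_cfg["format"] == "json":
        return json.dumps(enriched, indent=2)
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=enriched[0].keys())
    writer.writeheader()
    writer.writerows(enriched)
    return out.getvalue()

# Example: a client with a 12% margin and a flat 2.50 delivery charge
feed = generate_feed(
    [{"sku": "ABC-1", "cost_price": "10.00", "stock": 25}],
    {"margin_percent": 12, "delivery_charge": 2.50, "format": "csv"},
)
```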
Automation and Orchestration
Azure Monitor and Log Analytics: Tracks system performance and logs errors for troubleshooting.
Security and Compliance
Azure Active Directory (AAD): Authenticates and authorizes users and applications accessing the system.
Azure Key Vault: Securely stores sensitive information like API keys and connection strings.
Data Encryption: Encrypts data at rest (Azure Storage Service Encryption) and in transit (TLS).
By transitioning to an Azure-based solution, we automated the entire product feed distribution process, significantly improving efficiency and scalability. The architecture leverages serverless components, ensuring cost-effectiveness and flexibility, while Azure's robust security features guarantee data protection. This solution not only addresses our current challenges but also provides a foundation for future growth and innovation.
Migrating from one dedicated hosting provider to another is a complex task that requires careful planning. Whether you’re moving to a new data center or a different hosting provider, missing even a small step can lead to downtime, data loss, or performance issues. Based on my experience handling multiple hosting migrations, I’ve compiled this comprehensive checklist to help ensure a smooth transition.
Planning
✅ Document existing server configurations, applications, and dependencies.
✅ Ensure the new hosting environment supports your software stack, OS, and required services.
✅ Take full backups of databases, applications, and configurations before starting the migration.
✅ Notify stakeholders about potential downtime and plan for minimal disruption.
✅ Notify clients or managed partners so they can whitelist the new IP addresses.
✅ Ensure that the new hosting provider meets your security and compliance requirements.
Infrastructure Preparation
✅ Configure the new hosting environment, including OS, storage, and networking.
✅ Set up web servers (IIS, Apache, Nginx), databases, security tools, and dependencies.
✅ Ensure firewalls, DDoS protection, and security policies match the old setup.
✅ Install and configure SSL/TLS certificates to avoid security warnings.
✅ Maintain access control for applications, services, and databases.
Data Migration
✅ Use database replication or backup/restore methods to transfer SQL, NoSQL, or other databases.
✅ Copy all web application files, configurations, and static content to the new server.
✅ Update environment variables, API keys, and connection strings.
✅ If the hosting provider manages emails, transfer email accounts, DNS records, and backups.
✅ Ensure all background tasks, reports, and scheduled scripts are set up correctly.
Domain Configuration
✅ Modify A, CNAME, and MX records to point to the new hosting provider.
✅ Ensure traffic routing in the load balancer and failover mechanisms work properly.
✅ If using a CDN, update settings to reflect the new server location.
✅ If specific IPs are whitelisted for API access, update them accordingly.
✅ Ensure outbound emails work correctly after migration.
Testing
✅ Test web applications, APIs, and backend systems for issues.
✅ Ensure that data consistency and relationships remain intact.
✅ Validate firewall rules, authentication mechanisms, and SSL configurations.
✅ Ensure logs are being collected and monitoring tools are configured correctly.
✅ Prepare the holding page.
✅ Have key users validate the application before the final switchover.
Final Cutover
✅ Plan for an off-peak cutover to minimize business impact, ideally at the weekend if you are migrating corporate projects.
✅ Keep a close eye on CPU, memory, database queries, and server load.
✅ Ensure new backup processes are working correctly.
✅ Once everything is verified, cancel the old hosting subscription to avoid extra costs.
✅ Note any challenges faced during migration to improve future processes.
Things to Consider Outside Technology
✅ Take regular breaks to keep your mind fresh. Do not try to do everything in one go.
✅ Migration is stressful, so keep your energy up with a beverage or whatever boosts you best (you can see my boosting process below).
Migrating hosting can be a high-risk, high-impact process, but a detailed checklist ensures nothing gets overlooked. By carefully planning, testing, and monitoring every step, you can minimize downtime, prevent data loss, and ensure a seamless transition.
Managing a complex hosting environment is never easy, especially when you rely on a managed hosting partner to handle critical infrastructure. While outsourcing hosting can offer benefits like reduced operational overhead, it also comes with its own set of challenges. I experienced this firsthand in a situation that caused consistent application downtime for nearly two years. Here’s what happened, the challenges we faced, and how we ultimately resolved the issue.
The Setup of the Hosting Environment
Our hosting setup was designed to support multiple client applications with high availability and reliability. It included:
– A redundant pair of firewalls for security.
– A redundant pair of load balancers to distribute traffic efficiently.
– Golden IP addresses, which client applications point to.
– SSL offloading at the load balancer level to handle encryption.
This setup required precise configuration, especially when it came to SSL certificates. Since SSL offloading was handled at the load balancer level, the SSL certificates needed to be installed correctly on the load balancers with the right IP addresses.
The Problem
The biggest challenge was that we didn’t have direct access to the firewalls or load balancers. Everything had to go through our managed hosting partner. Here’s how the process worked:
We had to place the order for a new SSL certificate and go through the validation process. After that, we had to raise a ticket with the hosting partner's support team to install the SSL certificate.
This seemingly straightforward process became a recurring nightmare.
The Root Cause
The hosting partner had a large support team, and every time we raised a ticket, it was handled by a different engineer. This led to several issues:
Lack of Familiarity with the Setup: Each engineer was unfamiliar with our specific configuration, leading to mistakes like installing the SSL certificate in the wrong place or assigning the wrong IP address.
Missed Documentation: After the first incident, we had a call with the hosting partner and agreed to add a note in the system for engineers to double-check the setup. Unfortunately, the note was overlooked during subsequent incidents.
Repeated Downtime: Every time the SSL certificate was installed incorrectly, it caused application downtime, frustrating both our team and our clients.
The Impact
For two years, we faced the same issue repeatedly. Each incident required hours of troubleshooting, communication with the hosting partner, and damage control with our clients. The lack of control over our own infrastructure made it difficult to resolve the problem proactively.
The Solution
After enduring this cycle for far too long, we decided to take matters into our own hands. We migrated our setup to the cloud, where we could manage the infrastructure ourselves. This gave us full control over our firewalls and load balancers, so we could configure and troubleshoot them directly. We could purchase our own SSL certificates and no longer had to rely on a third party to install them correctly. We also implemented robust monitoring tools to detect and resolve issues before they impacted clients.
The move to the cloud not only resolved the SSL certificate issue but also gave us greater flexibility, scalability, and peace of mind.
Lessons Learned
This experience taught me several valuable lessons about managing complex hosting setups with a third-party partner:
Control is Critical: When dealing with complex infrastructure, having direct access and control is essential. Relying entirely on a third party can lead to unnecessary risks and delays.
Documentation is Key: Even with detailed documentation, human error can occur. It’s important to have fail-safes in place to ensure critical steps aren’t missed.
Communication Matters: Clear and consistent communication with your hosting partner is crucial. However, it’s not a substitute for having control over your own environment.
Know When to Change Course: If a setup isn’t working despite repeated efforts, it may be time to explore alternative solutions.
While managed hosting partners can offer convenience, they may not always be the best fit for complex setups. In our case, moving to the cloud and taking ownership of our infrastructure was the turning point that resolved years of frustration. If you're facing similar challenges, consider whether having more control over your environment could be the solution. After all, when it comes to your applications and clients, you deserve the ability to act quickly and decisively, without relying on someone else to get it right.
As our business expanded, we needed a high-availability solution for our SQL databases to ensure reliability and disaster recovery. Initially, we relied on SQL Server Mirroring, which provided redundancy at the database level. However, as our infrastructure expanded, we faced multiple challenges that made mirroring increasingly difficult to manage. Eventually, at the end of 2018, we decided to transition to SQL Server Clustering, which offered better scalability and stability.
So what are the reasons that drove us to change from mirroring to clustering?
Growing Number of Databases and Management Overhead
In SQL Mirroring, each database requires an individual mirroring setup. While this worked well initially, as our business expanded and the number of databases increased, this approach created significant administrative overhead. For example:
Manual Configuration: Setting up mirroring for each new database required manual intervention, which was time-consuming and prone to human error.
Monitoring Complexity: Monitoring the health and synchronization status of multiple mirrored databases became increasingly challenging as the number of databases grew.
Resource Consumption: Each mirrored database consumed additional resources, such as network bandwidth and storage, which added up as the number of databases increased.
In contrast, SQL Clustering operates at the instance level, allowing us to manage multiple databases under a single umbrella. This significantly reduced the administrative burden and streamlined our operations.
Inability to Mirror System Databases
One of the most critical limitations of SQL Mirroring was its inability to mirror system databases (e.g., master, msdb, model, and tempdb). These databases play a vital role in our operations:
Email and Reporting: We rely on the msdb database for SQL Server Agent jobs, which handle tasks like sending emails and generating reports.
Centralized Configuration: The master database stores server-wide configuration settings, logins, and other critical metadata.
Without the ability to mirror system databases, we were forced to rely on a single server for these essential functions. This created a single point of failure, which was unacceptable for our business continuity and disaster recovery plans. SQL Clustering, on the other hand, provides high availability for the entire SQL Server instance, including system databases, ensuring seamless failover and minimal downtime.
Additional Challenges with SQL Mirroring
Beyond the two main reasons, we faced several other practical disadvantages with SQL Mirroring:
Limited Failover Capabilities: SQL Mirroring requires manual intervention for failover in some configurations (e.g., high-performance mode), which can lead to extended downtime during outages.
No Readable Secondary Database: Unlike SQL Clustering or Always On Availability Groups, SQL Mirroring does not allow the secondary database to be used for read-only operations, limiting our ability to offload reporting workloads.
Network Dependency: Mirroring relies heavily on a stable and high-bandwidth network connection. Any network issues could disrupt synchronization, leading to potential data loss or delays.
Deprecation Concerns: Microsoft announced the deprecation of SQL Mirroring in favor of newer technologies like Always On Availability Groups. This made it clear that continuing with mirroring would not be a sustainable long-term solution.
Why SQL Clustering Was the Right Choice
SQL Clustering addressed many of the limitations we faced with SQL Mirroring:
Instance-Level High Availability: Clustering provides failover capabilities at the instance level, ensuring that all databases, including system databases, are protected.
Simplified Management: With clustering, we no longer needed to configure and monitor individual database mirrors, reducing administrative overhead.
Improved Resource Utilization: Clustering allows for better resource allocation and scalability, which is essential as our database environment continues to grow.
Future-Proofing: By adopting SQL Clustering, we aligned ourselves with modern high-availability solutions, ensuring compatibility with future SQL Server updates and features.
Mirroring had become challenging to maintain due to our growing number of databases, lack of system database mirroring, and various performance and management drawbacks. The deprecation of mirroring and the need for a scalable and resilient solution led us to migrate to SQL Server Clustering. SQL Clustering has since proven to be a reliable and efficient choice, enabling us to maintain high availability, streamline management, and support our expanding operations. Yes, SQL clustering is more expensive than Mirroring, but the benefit it brings is worth the extra cost.
Large corporations require solid authentication solutions to ensure secure and seamless access to their applications. Single sign-on plays a crucial role in maintaining data security while providing users with a smooth experience in accessing the platform. For one of our clients, a large enterprise with 30,000 employees in 13 different countries, we were tasked with implementing a Single Sign-On (SSO) solution to streamline access to a custom application we had developed for them.
The client’s environment relied heavily on Microsoft technologies. We also hosted the application in a Windows environment, so using Active Directory Federation Services (ADFS) was the ideal choice for this process. ADFS not only integrates seamlessly with Microsoft ecosystems but also provides a robust and scalable platform for authentication.
To ensure the solution met the client’s needs, I began with a Proof of Concept (PoC) using ADFS 4.0 on Windows Server 2016 to replicate the client environment. This allowed for validation of the technical feasibility and performance of the solution before full-scale deployment. The authentication mechanism was built on SAML (Security Assertion Markup Language), a widely adopted standard for SSO that ensures secure and interoperable communication between the identity provider (ADFS) and the custom application.
Proof of Concept (PoC)
First, we needed to establish that the proposed process would work with the custom-built application. So, I conducted a Proof of Concept (PoC) to validate the solution. Here’s how I approached it:
Setting up an ADFS environment
The first thing I needed was a replica of our client's environment. Since the client runs ADFS 4.0 on Windows Server 2016, I fired up an old server and deployed the same stack: Windows Server 2016 with Active Directory and ADFS 4.0.
Setting up ADFS requires a fully qualified domain name, so I used one of my own domains for the ADFS name (adfs.mobyshome.com) and quickly organised an SSL certificate for it as well.
Once everything was set, the domain was ready for single sign-on via ADFS.
SAML Configuration
I exported the ADFS metadata and shared it with the application team for integration.
Then I worked with the team to configure the application to consume SAML assertions from ADFS.
The next task was to register the custom application as a Relying Party Trust (RP Trust) in ADFS.
Finally, I configured the SAML endpoints and claims rules to map user attributes (e.g., email, username) to the application.
Testing
I conducted a complete round of testing to ensure users could log in using our test Active Directory (AD) credentials and verified that the SAML assertions were correctly passed to the application.
I also involved members of our testing team to cover other scenarios, such as password expiration, account lockout, and multi-factor authentication (MFA).
The proof of concept was successful, confirming that ADFS could handle the authentication needs of the custom application.
User Acceptance Test (UAT)
All the necessary documents and information were provided to the client to confirm the proof of concept and ensure they were happy with the setup and could follow the instructions. Once they were satisfied with the PoC, preparation began to set this up in the client's staging environment.
Even though the PoC environment was very similar, our client did not find it easy to replicate the process. The PoC ran on a single server, whereas a client with 30,000 employees has a very different infrastructure. Through continuous communication with the client's team to follow the right steps and fine-tune the SAML configuration, they were able to establish a connection to our staging application.
Going Live
The go-live process was a critical phase, and we took several steps to ensure a smooth transition:
Pre-Launch Checks
Verified that all servers were configured correctly and synchronized.
Conducted a load test to ensure the ADFS farm could handle the expected traffic.
Launch Day
Monitored the ADFS servers and WAP servers closely for any performance issues.
Provided real-time support to the client’s IT team to address any user login issues.
Kept all stakeholders informed about the process to minimize confusion.
Post-Launch
Collected feedback from users and addressed any concerns.
Monitored system performance and made adjustments as needed.
Implementing ADFS for a client with 30,000 employees was a challenging but rewarding experience. By following a structured approach—from the PoC to the live implementation—we ensured a seamless transition to SSO for their custom application. Thorough testing during the PoC and pre-launch phases eliminated most of the obstacles and ensured a smooth go-live. The success of the project depended on careful planning, thorough testing, and effective communication.
For organizations considering ADFS, this experience demonstrates that it is a reliable and scalable solution for enabling SSO in a Microsoft environment. With the right preparation and execution, ADFS can significantly enhance both security and user experience.
After the Choice Platform became available in high street shops (Introducing GiftChoice for Ultimate Gift Card Experience – A Product I am Proud to Lead), it was only a matter of time before it became a target for scammers. It was 4 AM in the USA, and I was in deep sleep enjoying my holiday when my phone buzzed with a message from my lead developer. Something wasn’t right. Our analytics showed that we were issuing more vouchers than actual sales—which meant only one thing: either we were doing something wrong, or we were being scammed!
I jumped out of bed and called my team. After investigating, we found out how the scammers were pulling it off, and honestly, I had to admire their creativity.
How the Scam Worked
Here’s what they were doing:
They'd load up their shopping basket with groceries and 4-5 GiftChoice cards.
At the checkout, they made sure the cashier scanned the gift cards first—activating them in our system.
While the cashier scanned the other items, they quickly scratched the gift cards, got the voucher codes, and redeemed them online immediately.
Within seconds, they received their preferred retailer gift card in their email.
Then, they’d pretend they “forgot their wallet” and ask to cancel the purchase.
The cashier could cancel the groceries but not the gift cards because they had already been redeemed.
And the end result? We were losing money by sending them gift cards without a sale.
A Quick Fix at 4 AM
We needed a complete solution, but a clean, foolproof fix required cooperation from retailers, payment processors, and our partners, which would take multiple days. We couldn't afford to lose money while waiting for a permanent fix.
So, I came up with a quick and simple hack:
➡️ Add a 30-minute delay before a card could be redeemed.
How This Stopped the Scam Instantly
When a cashier swipes the card, it gets activated in our system, and our redemption process checks that the card is active before processing it. I simply asked my team to add a rule requiring a card to have been activated for at least 30 minutes before it can be redeemed. Luckily, we were able to add this rule in a stored procedure, which was quick to implement without a full release process.
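The real rule lives in a stored procedure, but it is simple enough to sketch. The Python below is purely illustrative of the check (the field names and card shape are assumptions); only the 30-minute threshold comes from the actual fix:

```python
# Illustrative sketch of the redemption guard; the production version is a
# check inside a stored procedure. Field names are assumptions.
from datetime import datetime, timedelta, timezone

MIN_ACTIVATION_AGE = timedelta(minutes=30)

def can_redeem(card: dict, now: datetime | None = None) -> bool:
    """A card may be redeemed only if it is active and was activated >= 30 minutes ago."""
    now = now or datetime.now(timezone.utc)
    if card["status"] != "ACTIVE":
        return False
    return now - card["activated_at"] >= MIN_ACTIVATION_AGE

# Example: a card activated 5 minutes ago is rejected, blocking the in-store scam
card = {"status": "ACTIVE", "activated_at": datetime.now(timezone.utc) - timedelta(minutes=5)}
assert can_redeem(card) is False
```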
🚀 Result? The scammers were blocked immediately.
The best part? 99.99% of our real customers do not redeem their gift cards right after buying them, so the delay didn’t affect them at all. But for scammers, it completely ruined their plan.
It wasn’t the most high-tech solution, but sometimes, the simplest ideas work best. This quick fix stopped the fraud overnight and saved us a ton of money. The scam on the Choice platform was a wake-up call, but it also showcased our team’s ability to think on our feet and act decisively under pressure.
Moral of the story? Even on holiday, always be ready to think on your feet.
In the world of web hosting, there is no shortage of tutorials and guides for hosting architectures. However, when it comes to applications based on Microsoft technology that require hosting in IIS, finding a decent, easy-to-follow architecture is surprisingly challenging. The goal here is to provide a simple yet effective Windows hosting architecture that serves medium-to-large applications requiring scalability and high availability. Depending on the number of users and the scale of the application, the hardware configuration can change, but the architecture should remain the same.
To provide a clear and practical architecture that addresses the core components needed for a functional and reliable setup, the article is divided into four areas:
Network Configuration: How to set up a secure and efficient network infrastructure, including firewalls, load balancers, and DNS settings, to ensure smooth communication between servers and clients.
Web Server Setup: This section will focus on configuring a Windows-based web server (e.g., IIS – Internet Information Services) to host your applications, handle HTTP/HTTPS requests, and optimize performance.
Database Server: A robust hosting architecture requires a reliable database server. We’ll explore how to set up and manage a database server (e.g., Microsoft SQL Server) to store and retrieve data efficiently while ensuring security and scalability.
Backup and Data Recovery: No hosting architecture is complete without a solid backup strategy. We’ll walk through setting up automated backups, storing data securely, and implementing a recovery plan to minimize downtime in case of failures.
By the end of this article, you should have a clear understanding of how to build a simple yet effective Windows hosting architecture that meets your needs.
Network Configuration
A robust network infrastructure is the backbone of any hosting architecture. For a Windows-based hosting environment, ensuring high availability, security, and performance is critical. This is where a redundant pair of active/passive Cisco firewalls and a redundant pair of Layer 7 load balancers come into play. Redundancy is a key principle in designing a reliable hosting architecture. By implementing redundant pairs of firewalls and load balancers, you ensure that your network remains operational even in the event of hardware failures or unexpected issues. Let’s break down why these components are essential and how they contribute to a reliable hosting setup.
Redundant Pair of Active/Passive Cisco Firewalls
Firewalls are the first line of defense in any network, protecting your hosting environment from unauthorized access, malicious attacks, and data breaches. Yes, you can use a WAF (web application firewall) for your application, but hardware firewalls are physical devices that act as a gatekeeper between the network and the external environment, managing traffic and providing security. For large-scale projects, it is better to use a proper hardware firewall to ensure security.
Using a redundant pair of Cisco firewalls in an active/passive configuration ensures:
High Availability: If the active firewall fails, the passive firewall immediately takes over, ensuring uninterrupted protection and minimizing downtime.
Enhanced Security: Cisco firewalls are known for their advanced security features, including intrusion prevention systems (IPS), deep packet inspection, and VPN support, which safeguard your network from external threats.
Scalability: As your hosting needs grow, Cisco firewalls can handle increased traffic and adapt to more complex security requirements.
In an active/passive setup, one firewall actively manages traffic while the other remains on standby, ready to take over in case of a failure. This redundancy is crucial for maintaining uptime and ensuring business continuity.
Redundant Pair of Layer 7 Load Balancers
Load balancers are essential for distributing incoming traffic across multiple web servers, ensuring optimal performance and preventing server overload. Using a redundant pair of Layer 7 load balancers provides the following benefits:
Traffic Distribution: Layer 7 load balancers operate at the application layer, meaning they can make intelligent routing decisions based on content, such as URLs or cookies. This ensures that traffic is evenly distributed across your web servers, improving response times and user experience.
High Availability: Similar to the firewalls, a redundant pair of load balancers ensures that if one fails, the other can seamlessly take over, preventing service disruptions.
Health Monitoring: Layer 7 load balancers can monitor the health of your web servers and automatically route traffic away from any server that is down or underperforming.
SSL Offloading: By handling SSL/TLS termination at the load balancer level, you can reduce the computational load on your web servers, improving overall performance.
Web Server Setup
Once the network infrastructure is in place, the next critical component of your hosting architecture is the web server layer. For a scalable and high-performance Windows hosting environment, it’s recommended to use multiple web servers behind a load balancer. This setup ensures that your application can handle increasing traffic loads while maintaining high availability.
Load-Balanced Multiple Web Servers
Depending on your application load and requirements, you can add anywhere from two to n web servers in this layer. Ensure that each web server is configured identically to keep application behavior consistent.
Here’s why this approach is beneficial:
Scalability: Adding more web servers allows you to scale horizontally, distributing the load across multiple machines. This ensures that your application can handle traffic spikes without performance degradation.
Fault Tolerance: If one web server fails, the load balancer will automatically route traffic to the remaining servers, ensuring uninterrupted service.
Performance Optimization: By distributing traffic evenly, you can reduce the load on individual servers, improving response times and overall user experience.
To implement this, configure your Layer 7 load balancer (discussed in the Network Configuration section) to distribute incoming HTTP/HTTPS requests across all available web servers.
IIS Synchronization for Code Deployment
One of the challenges of managing multiple web servers is ensuring that all servers have the same codebase and configuration. This is where IIS (Internet Information Services) synchronization comes into play. By synchronizing your web servers, you can ensure that any code or configuration changes made on one server are automatically replicated to the others. Here’s how to achieve this:
Shared Configuration: Use IIS’s shared configuration feature to store configuration files (e.g., applicationHost.config) in a central location, such as a network share or a cloud storage service. This ensures that all web servers use the same settings.
Web Deploy Tool: Microsoft’s Web Deploy Tool is a powerful utility that allows you to synchronize websites, applications, and content across multiple servers. When you release new code on one server, Web Deploy can automatically replicate the changes to all other servers in the farm.
Automated Scripts: For advanced setups, you can create scripts (e.g., using PowerShell) to automate the synchronization process. This ensures that code deployments are consistent and error-free.
Database Server
The database is the heart of any application, storing and managing critical data. For a robust Windows hosting architecture, it's essential to ensure that your database layer is both performant and highly available. There are a few options to choose from, such as failover clustering, database mirroring, and log shipping, but two Microsoft SQL Server instances in a failover cluster configuration provide better redundancy and minimize downtime in case of failures.
Why Use a Failover Cluster?
A failover cluster is a group of servers that work together to provide high availability for applications and services. In the context of Microsoft SQL Server, a failover cluster ensures that if one database server fails, the other server automatically takes over, ensuring uninterrupted access to your data. Here’s why this setup is crucial:
High Availability: By using two SQL Server instances in a failover cluster, you eliminate single points of failure. If the primary server goes down, the secondary server takes over seamlessly, minimizing downtime.
Data Integrity: Failover clusters ensure that your data remains consistent and accessible, even during hardware or software failures.
Scalability: This setup allows you to scale your database layer as your application grows, without compromising on reliability.
Alternative to Failover Cluster: SQL Server Always On Availability Groups
While a failover cluster is a robust solution for high availability, it may not be the best fit for every scenario. For example, setting up a failover cluster requires shared storage and can be complex to configure and maintain. If you’re looking for a more flexible or simpler alternative, SQL Server Always On Availability Groups is an excellent option.
Always On Availability Groups (AGs) are a high-availability and disaster recovery solution introduced in Microsoft SQL Server 2012. They provide database-level redundancy by allowing you to group multiple databases into a single availability group and replicate them across multiple SQL Server instances. Here’s why AGs are a great alternative:
Database-Level Redundancy: Unlike failover clusters, which operate at the instance level, AGs work at the database level. This means you can replicate specific databases rather than the entire SQL Server instance, providing more granular control.
No Shared Storage Required: AGs do not require shared storage, simplifying the infrastructure and reducing costs.
Readable Secondaries: Secondary replicas in an AG can be configured as read-only, allowing you to offload read operations (e.g., reporting or analytics) to the secondary server, improving performance.
Automatic or Manual Failover: AGs support both automatic and manual failover, giving you flexibility in how you manage high availability.
When to Choose Always On Availability Groups Over Failover Clustering
Granular Control: If you only need high availability for specific databases, AGs are a better choice.
Cost Efficiency: AGs eliminate the need for shared storage, reducing infrastructure costs.
Read-Only Workloads: If you want to offload read operations to secondary replicas, AGs provide this capability out of the box.
Other Alternatives
Database Mirroring: An older high-availability feature that provides database-level redundancy. However, it’s deprecated in favor of Always On Availability Groups.
Log Shipping: A simpler solution for disaster recovery, where transaction logs are periodically shipped and applied to a secondary server. While it’s not as robust as AGs or failover clustering, it’s easier to set up and maintain.
Comparison Table
| Feature | Failover Clustering | Database Mirroring | Log Shipping | Always On Availability Groups |
|---|---|---|---|---|
| Scope | Instance-level | Database-level | Database-level | Database-level |
| Failover | Automatic | Automatic/Manual | Manual | Automatic/Manual |
| Shared Storage | Required | Not required | Not required | Not required |
| Cost | High (Enterprise hardware) | Moderate | Low | High (Enterprise Edition) |
| Complexity | High | Moderate | Low | High |
| Readable Secondary | No | Yes (with limitations) | No | Yes |
| Deprecated | No | Yes (since SQL Server 2012) | No | No |
| Best Use Case | High availability | High availability | Disaster recovery | High availability + disaster recovery |
Dedicated Storage Layer for Backup
No hosting architecture is complete without a reliable backup and recovery plan. Data loss can occur due to hardware failures, software bugs, human errors, or even cyberattacks. To safeguard your data, it’s essential to implement a dedicated storage layer for backups, coupled with a comprehensive backup strategy.
Why a Dedicated Backup Storage Layer?
A dedicated storage layer for backups ensures that your data is securely stored, easily recoverable, and protected from accidental deletion or corruption. Here’s why it’s critical:
Disaster Recovery: In the event of a catastrophic failure, backups allow you to restore your application and data quickly, minimizing downtime.
Compliance: Many industries require businesses to maintain backups for regulatory compliance.
Data Integrity: Regular backups ensure that you can recover from data corruption or accidental deletions.
It is important to maintain at least three copies of your data (primary plus two backups), store them on two different types of media, and keep one copy offsite or in the cloud (the 3-2-1 rule). Periodically testing your backups is also essential to ensure they can be restored successfully.
Using Commvault for Backup
Commvault is a powerful enterprise-grade backup and recovery solution that provides a unified platform for managing backups across on-premises, cloud, and hybrid environments. Here’s how to leverage Commvault for your backup strategy:
Centralized Management: Commvault provides a single interface to manage backups for your entire infrastructure, including databases, web servers, and file systems. You can define backup policies, schedules, and retention periods from a centralized console.
Incremental and Differential Backups: Commvault supports incremental and differential backups, reducing the amount of data transferred and stored during each backup cycle. This saves storage space and minimizes backup windows.
Application-Aware Backups: For Microsoft SQL Server, Commvault offers application-aware backups that ensure transaction consistency and enable point-in-time recovery.
Cloud Integration: Commvault supports backing up data to cloud storage providers like AWS, Azure, and Google Cloud, providing flexibility and scalability.
Automated Recovery Testing: Commvault allows you to automate recovery testing, ensuring that your backups are valid and can be restored when needed.
Alternative Backup Solutions
If Commvault is not an option, there are other reliable backup tools and strategies you can consider:
Veeam Backup & Replication: A popular backup solution for virtualized environments, Veeam offers features like instant VM recovery, application-aware backups, and cloud integration.
Microsoft Azure Backup: If your infrastructure is hosted on Azure, Azure Backup provides a seamless and scalable solution for backing up VMs, SQL Server, and file systems.
Here’s a quick recap of the architecture:
Redundant pair of Cisco ASAv firewalls (active/passive) for high availability and security. Redundant pair of Layer 7 load balancers to distribute traffic across web servers and ensure fault tolerance.
Multiple load-balanced web servers (e.g., IIS) to handle application traffic. IIS synchronization for seamless code deployment across servers.
Two Microsoft SQL Server instances configured in a failover cluster for high availability. Alternatively, SQL Server Always On Availability Groups for more granular control and flexibility.
Commvault Simpana Enterprise for centralized backup management and disaster recovery.
Below is a simplified diagram of the hosting architecture: