This blog is based on a closed roundtable discussion on Zero Trust and email security, where practitioners shared how these controls are implemented and operated in real-world environments. Not a conference. Not a product showcase. Just open conversations about what works, what breaks, and what needs to be adapted once theory meets reality.
Across very different organizations and use cases, a consistent pattern emerged. The discussions rarely centered on ideal architectures or “best models.” Instead, they focused on practical adaptations shaped by existing systems, operational constraints, and the realities of day-to-day security work.
Zero Trust as an Operational Layer
Zero Trust is often introduced as an access problem: who can reach which application, from where, and under which conditions. In practice, many organizations already have this layer in place. Identity providers, SSO, and basic access policies are no longer the hard part.
The real challenges appear later.
Once basic access works, Zero Trust starts behaving less like an authentication mechanism and more like an operational layer. Teams begin using it to solve problems that sit between applications, users, and networks: handling traffic logic, enforcing controls without changing application code, and fixing integration gaps that would otherwise slow teams down.
Several discussions described scenarios where application behavior needed to change, but the application itself was owned by an external vendor or could not be modified quickly. Instead of waiting for backend changes, logic was applied at the access and traffic layer. Requests were adjusted, headers were injected, or routing decisions were made upstream, allowing the application to continue functioning without disruption.
What made this possible was not Zero Trust “as a concept”, but the fact that access control and traffic handling lived on the same operational layer.
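The pattern described above can be sketched in a few lines. This is a minimal, illustrative example, not any specific vendor's API: the upstream URLs, the header name, and the path-based routing rule are all hypothetical stand-ins for logic applied at the access and traffic layer.

```python
# Sketch of traffic-layer logic applied upstream of a vendor application.
# All names (upstreams, header, path rule) are illustrative.

LEGACY_UPSTREAM = "https://legacy-app.internal"
SHIM_UPSTREAM = "https://compat-shim.internal"

def adjust_request(request: dict) -> dict:
    """Inject headers and make routing decisions without changing the app."""
    headers = dict(request.get("headers", {}))
    # Inject an identity header the backend expects but cannot derive itself.
    headers["X-Authenticated-User"] = request["user"]
    # Route requests that need the compatibility fix to a different upstream.
    upstream = SHIM_UPSTREAM if request["path"].startswith("/reports") else LEGACY_UPSTREAM
    return {**request, "headers": headers, "upstream": upstream}
```

The application itself never changes; requests are adjusted before they reach it, which is exactly why access control and traffic handling living on the same layer matters.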
Email Security Beyond Initial Detection
Email remains a primary attack vector, largely because it relies on the human factor. At the same time, most modern email platforms already do a reasonably good job of identifying clearly malicious messages.
The bottleneck is no longer basic detection.
What repeatedly came up in the discussions was the grey area: emails that are suspicious but not obviously malicious, partial impersonation attempts, or activity that only becomes meaningful when viewed across multiple signals and over time.
In these cases, visibility alone does not help. Dashboards fill up quickly, and analysts are left with large volumes of tagged messages but little guidance on where to focus.
The real challenge is not “seeing more”, but turning email signals into decisions that fit existing SOC workflows.
Integrating Email Signals into SOC Workflows
One approach described during the roundtable relied on keeping the existing email gateway and SOC tooling intact, while introducing email security as an additional signal layer.
Emails continued to arrive at the primary email platform, while a parallel analysis flow applied classifications such as malicious, suspicious, or spoofed. Clearly malicious messages were handled automatically. Everything else was retained as signal.
All relevant telemetry was forwarded into the organization’s SIEM alongside other security data. However, the team emphasized that dashboards alone were not enough, especially given the volume of messages tagged as suspicious.
To make this data usable, internal systems were introduced to assist analysts. These tools summarized activity over time, clustered related events, and highlighted patterns that suggested coordinated campaigns.
Importantly, these outputs were reviewed manually. The tools were not treated as autonomous decision-makers. Their role was to reduce volume and cognitive load, allowing analysts to spend their time on investigations that actually required human judgment.
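The triage-and-cluster flow described above can be sketched roughly as follows. The verdict labels match those mentioned earlier; the clustering key (sender domain) and the threshold are illustrative assumptions, not the actual internal tooling.

```python
from collections import Counter

def triage(messages):
    """Auto-handle clearly malicious mail; keep the grey area as signal."""
    quarantined, signals = [], []
    for msg in messages:
        if msg["verdict"] == "malicious":
            quarantined.append(msg)      # handled automatically
        elif msg["verdict"] in ("suspicious", "spoofed"):
            signals.append(msg)          # forwarded to the SIEM as telemetry
    return quarantined, signals

def campaign_candidates(signals, threshold=3):
    """Cluster retained signals by sender domain; a domain that keeps
    recurring may indicate a coordinated campaign worth analyst review."""
    domains = Counter(m["sender"].split("@")[-1] for m in signals)
    return sorted(d for d, n in domains.items() if n >= threshold)
```

The point is the shape of the flow: automation removes the obvious cases, and clustering reduces the remaining volume to a short list an analyst can actually review.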
Use of Internal AI in SOC Operations
Several teams described using internal AI-based tools to support investigations, but the emphasis was consistent: AI was not there to replace analysts.
Instead, it was used to:
- Summarize large datasets
- Group related events
- Highlight anomalies or emerging campaigns
- Suggest areas that deserve closer inspection
Participants were explicit about the limitations. Outputs were checked, results were not assumed to be complete, and the systems were continuously tuned. The value came from direction, not automation for its own sake.
This framing helped avoid a common trap: treating AI as a silver bullet rather than as a practical aid for overburdened teams.
Environmental and Operational Constraints
One of the strongest themes across the sessions was the role of constraints.
In some environments, assets were not located inside the organizational network, but deep inside customer environments. Networks could not be modified. Firewall ports could not be opened. VPN infrastructure could not be installed. On-site access was not feasible, and the scale involved tens of thousands of deployed devices.
Under these conditions, Zero Trust was not adopted because it was modern or elegant. It was adopted because there was no other viable option.
In this context, Zero Trust was effectively applied in reverse: internal users needed controlled access to external assets, while respecting strict customer boundaries. Automation became mandatory. Manual onboarding, per-device configuration, or per-user exceptions simply did not scale.
As deployments grew, teams encountered real platform limits: application counts, URL limits, tunnel limits, and DNS limits. Initial designs had to be revisited. Architectures evolved. Domain models were adjusted. Assumptions were challenged mid-rollout.
None of this resembled a clean reference diagram. It did resemble real engineering work.
Zero Trust Access Management in Practice
Operating Zero Trust at scale introduced its own operational realities.
Identity remained the anchor, with access decisions tied to group membership and device posture. Production access was denied by default and granted only through time-bound approvals. Device health was continuously evaluated, not just at login.
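A default-deny check of this kind can be expressed as a simple policy function. This is a sketch under the assumptions described above (group membership, device posture, time-bound approvals); the group name is a hypothetical example.

```python
from datetime import datetime, timezone

def production_access_allowed(user_groups, device_healthy, approval_expiry, now=None):
    """Default-deny: group membership, current device posture, and an
    unexpired time-bound approval are all required ("prod-operators" is
    an illustrative group name)."""
    now = now or datetime.now(timezone.utc)
    if "prod-operators" not in user_groups:
        return False  # identity anchor: access is tied to group membership
    if not device_healthy:
        return False  # posture is re-evaluated on every check, not just at login
    if approval_expiry is None or approval_expiry <= now:
        return False  # approvals are time-bound; expired means denied
    return True
```

Because every branch falls through to denial, a failure anywhere in the chain restricts access rather than granting it, which matches the failure mode the participants observed.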
Occasional synchronization delays occurred between identity systems and access policies. Notably, these issues tended to result in temporary access restrictions for legitimate users, rather than unintended access. Ongoing tuning, clear role definitions, and careful group design were essential to keeping the system usable.
In parallel, teams addressed practical field problems. Some external services required access from specific geographic locations. Rather than allowing unmanaged VPN software, controlled egress paths were introduced, offering approved geographic access while preserving visibility and policy enforcement.
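The controlled-egress idea reduces to a small routing table. The hostnames and egress names below are hypothetical; the point is that unknown destinations fall back to a default, fully inspected path rather than to unmanaged VPN software.

```python
# Illustrative mapping of external services to approved egress locations;
# hostnames and egress identifiers are hypothetical.
APPROVED_EGRESS = {
    "vendor-portal.example": "egress-eu-west",
    "partner-api.example": "egress-us-east",
}

def egress_for(host):
    """Select a controlled egress path for a destination; anything not
    explicitly approved uses the default inspected path."""
    return APPROVED_EGRESS.get(host, "egress-default")
```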
Again, the goal was not perfection. It was to remove friction without introducing new risk.
Traffic Control, DLP, and Privileged Access
Across all discussions, one idea kept resurfacing: the most effective controls are often the least visible.
Restricting access to organizational accounts instead of personal ones removed entire classes of data leakage risk. Deciding deliberately where traffic inspection should and should not occur defined the boundaries of visibility. Applying auditing to privileged access improved accountability without changing how teams worked day to day.
These controls did not redefine workflows. They supported them quietly, in the background.
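Auditing privileged access without changing workflows can be as simple as wrapping the action. This is a sketch: the action name, log format, and audit sink are illustrative, not a prescribed implementation.

```python
import functools
import json
import sys
import time

def audited(action_name):
    """Record who performed a privileged action and when, without changing
    how the action itself is invoked; the log format is illustrative."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(user, *args, **kwargs):
            record = {"action": action_name, "user": user, "ts": time.time()}
            print(json.dumps(record), file=sys.stderr)  # ship to audit sink
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator

@audited("rotate-db-credentials")
def rotate_db_credentials(user, database):
    # The action itself is unchanged; auditing happens around it.
    return f"{database}: credentials rotated by {user}"
```

Callers invoke `rotate_db_credentials` exactly as before, which is the "quiet control" property: accountability improves while the workflow stays untouched.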
Operational Outcomes and Observations
What was noticeably absent from the roundtable was the idea of a perfect Zero Trust or email security model.
What emerged instead were practical adaptations shaped by constraints: existing systems, organizational structure, scale, and human behavior. Success was not measured by architectural purity, but by whether controls aligned with how organizations actually operate.
In real environments, security works best when it fits.