The social media platform has quietly rolled out technical safeguards around its Grok chatbot to stop it from generating sexually explicit synthetic media. The move follows a wave of international concern and a formal inquiry by Ofcom, Britain's communications regulator, which is investigating whether the service has broken the law. Engineers have added layers of content analysis that flag and block prompts likely to produce deepfake material, narrowing the tool's creative latitude.

Observers note that the changes arrive as the broader AI industry faces calls for tighter oversight, and the platform's leadership appears eager to demonstrate compliance without losing the conversational appeal that made Grok popular. The new barriers will likely reshape how users interact with the chatbot, but they also signal a growing willingness among tech firms to adapt quickly when regulatory pressure mounts.