12 Hidden Settings To Enable In Your Claude Code Setup
Claude Code has many features, and fixes for common issues are often hidden in config files and environment variables. This recap previews a list of hidden settings and open-source solutions that improve behavior beyond the built-in fixes.
Unlock Claude Code’s hidden knobs: tweak retention, read-output limits, sub-agents, hooks, and privacy to tailor performance and security.
Summary
AI LABS’s deep-dive into Claude Code reveals that many useful controls sit buried in config files and environment variables. The team shows practical, real-world tweaks you can enable now, beyond what’s visible in the command menu. Key fixes include extending conversation retention beyond one month via settings.json, adjusting how much terminal output Claude loads, and tailoring sub-agents with hooks, isolation, and background execution. They also outline strategies to handle large files, prevent co-authored commits, and disable telemetry while preserving updates. The video walks through practical steps like setting cleanup periods, configuring path-specific read rules, and forcing preferred libraries through exit-code driven hooks. You’ll also hear about tools like Claude CTX for profile switching and Make.com’s agent-focused workflow features for governance and visibility. Throughout, the host plugs deeper dives on sub-agents, agent teams, and Ralph loops, plus practical caveats when you’re pushing toward a million-token context window. If you’re managing complex Claude Code setups, this episode is a treasure trove of actionable settings for reliability, privacy, and scale.
Key Takeaways
- Extend conversation retention: set `cleanupPeriodDays` to 365 in the main `.claude` folder's settings.json to keep a year of conversations.
- Increase terminal read limits: raise the read-characters setting from 30K to around 150K to load full command outputs.
- Use sub-agents effectively: deploy dedicated sub-agents with the agent flag to delegate tasks without the overhead of the main agent loading them first, and control isolation for safe experimentation.
- Enforce hooks and exit codes: implement pre-command hooks that trigger on unwanted libraries (e.g., replacing pip with uv) and use exit code 2 to force Claude to retry with the corrected behavior.
- Improve file-read handling for large files: add a CLAUDE.md rule to read large files in chunks using offset/limit parameters when line counts exceed 2,000.
- Control agent behavior and visibility: configure background, isolation, and tool-spawn permissions to prevent rogue agents and unnecessary spawns.
- Opt out of telemetry while preserving updates: set three main settings to disable telemetry, error reporting, and feedback in settings.json (and understand the CLI trade-offs).
Who Is This For?
This is essential viewing for developers and DevOps engineers who run Claude Code in production, manage large projects, or race against context-window limits. It’s a must-watch for teams seeking reliability, privacy, and fine-grained control over agent behavior and data retention.
Notable Quotes
"We went through all of it and put together a list of hidden settings and flags you should enable right now for the issues that Claude does not have a built-in fix for."
—Intro and thesis: many fixes live in config, not the UI.
"In the main `.claude` folder, there is a settings.json file. We'll be using this file for a lot of other settings throughout the video as well."
—Identifies where core config lives.
"Any command that dumps a lot into the terminal, Claude only gets the 30,000 characters."
—Explains the read limit and motivation to raise it.
"You can increase this to something like 150,000 so that the full output is actually loaded and Claude can read through all of it properly."
—Practical tweak for large outputs.
"If you want to quickly hand the work to a specific agent, run Claude as a sub-agent. You just need to use the agent flag and type in the name of the sub-agent you want to run Claude as."
—Demonstrates sub-agent delegation.
Questions This Video Answers
- How do I extend Claude Code's conversation retention beyond 30 days?
- How can I load larger terminal outputs into Claude Code without truncation?
- What are sub-agents in Claude Code and how do I use them effectively?
- How do I configure hooks and exit codes to enforce preferred libraries in Claude Code?
- What is Claude CTX and how do profile switching and permissions work across multiple configurations?
Tags: Claude Code, Claude CTX, sub-agents, agent teams, context window, settings.json, .claude, hooks, exit code 2, Ralph loops, prompt stash, telemetry opt-out, Make.com
Full Transcript
Claude Code has so many features at this point that it's genuinely hard to keep up. Even with everything visible in the command menu, there is a lot that is not immediately apparent. Most of the problems you run into while using Claude Code actually have fixes already built in. They are just buried in config files and environment variables that hardly anyone talks about. We went through all of it and put together a list of hidden settings and flags you should enable right now for the issues that Claude does not have a built-in fix for. We also found some solid open-source solutions.
Now, if you've ever run the insights command or used Claude with the resume flag, you might have noticed that all the conversations that show up are limited to just 1 month, even if you've been using Claude for much longer. And if you actually need to go back to those sessions or want an insight analysis for a longer period now that Opus 4.6 supports a 1 million token context window, you won't be able to do that because Claude Code doesn't store them on the system for longer than a month. Now, this 1 month is the default time span set in Claude's configs for retained data.
But that doesn't mean you can't modify these settings to retain data for longer. Claude actually has a setting for that. In the main `.claude` folder, there is a settings.json file. We'll be using this file for a lot of other settings throughout the video as well. This is how you change a lot of the default settings in Claude Code. You can add the `cleanupPeriodDays` field with any number of days you want. So, if you set that to 365, it will retain a full year's worth of conversations. And by setting it to zero, you're asking it to store none of your conversations, meaning you won't be able to extract any information or view past references.
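As a concrete sketch, the retention override is a single field in that settings.json (`cleanupPeriodDays` is the documented key; setting it to 0 disables retention entirely):

```json
{
  "cleanupPeriodDays": 365
}
```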
Another thing you can do is configure path-specific rules inside your project's `.claude` folder. They are loaded into the context when the agent tries to modify a specific file. These rules are triggered on read operations and are loaded when the path pattern matches the file being read. They contain all of the instructions that need to be followed when working with that file. Normally, this is what people add to the main CLAUDE.md: they dump all of the instructions related to different aspects of the app into one place. Although we don't need to worry about context now, path-specific rules still help with separation of concerns once your app gets big.
Putting them all in one place sometimes leads to Claude ignoring instructions you wrote because the file has become so large and full of instructions that Claude doesn't know which ones to actually focus on. For example, if it's working on the front end, it only needs to load the React components instructions, not all of them at the same time. This keeps the agent more focused. As you already know, Claude Code can run bash commands and read their outputs. But depending on the command, those outputs can be massive. Anthropic has set a limit on how many characters Claude can actually read from any command's output.
And that limit is 30,000 characters. Anything beyond that gets truncated and Claude never sees it. So, for example, if you run your test suite and it prints thousands of lines of results, Claude is only going to read the first 30,000 characters of that output. Same thing if you're looking at build logs or running database migrations. Any command that dumps a lot into the terminal, Claude only gets the 30,000 characters. To fix this, in your settings.json, there is again a config that controls how many characters Claude Code loads from the terminal into its context window. This was set to 30K because of the older 200K context window models, where you couldn't afford to load more.
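One way to raise the cap, assuming the `BASH_MAX_OUTPUT_LENGTH` environment variable is the knob in your version (settings.json can set environment variables through its `env` block):

```json
{
  "env": {
    "BASH_MAX_OUTPUT_LENGTH": "150000"
  }
}
```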
But again, with the new 1 million token window, that's not a problem anymore. You can increase this to something like 150,000 so that the full output is actually loaded and Claude can read through all of it properly. If you are working on a project that contains a lot of sub-agents, each tailored to its own tasks, and you have a task meant for a specific agent, we normally ask Claude explicitly in the prompt to use that agent for the task. But if you want to quickly hand the work to a specific agent, what you can do is run Claude as that sub-agent.
You just need to use the agent flag and type in the name of the sub-agent you want to run Claude as. Now you can delegate tasks to it directly and use its capabilities and tools without the overhead of Claude first loading that sub-agent and then performing the task. As you might already know, you can set the model and MCP tools configuration when configuring sub-agents. But there are many more configurations you can add to a sub-agent. For example, sub-agents do not inherit skills by default, but if you use the skill flag, you can make an agent inherit a skill you've created for that specific sub-agent.
This means it can actually use that skill to perform its tasks. Aside from skills, there's another flag called effort. If you didn't know, effort determines how many tokens and how much thinking power the agent uses when performing tasks. Some agents by default don't need much effort, so you change it based on the task. In addition to effort, you can also configure hooks inside the sub-agent that are specific to that agent's workflow. You can also set whether an agent should always run in the background using the background flag. Set it to true if you want the agent to work completely in the background without disrupting the main agent, or false if you want the agent to always appear at the top.
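Pulling these options together, a sub-agent is conventionally defined as a markdown file with YAML frontmatter under `.claude/agents/`. The sketch below is hedged: `name`, `description`, `tools`, and `model` are standard fields, while the commented keys reflect the options described here and their exact names may differ by Claude Code version:

```markdown
---
name: frontend-helper
description: Handles React component work for this project
tools: Read, Edit, Bash
model: sonnet
# The keys below follow the options described in this video; verify the
# exact names against the current Claude Code sub-agent docs:
# skills: react-conventions   (inherit a specific skill)
# effort: low                 (token/thinking budget)
# background: true            (run without surfacing in the main session)
---
You handle front-end tasks only. Load React component instructions as needed.
```

Per the video, delegation then looks like running Claude with the agent flag, e.g. `claude --agent frontend-helper` (check `claude --help` for the exact flag spelling in your version).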
You can also have sub-agents run in isolation in a separate work tree by setting the isolation config in the agent description. Isolated agents get a temporary copy of the work tree, giving them space to make significant changes without risking the main codebase. If the agent makes no changes, the work tree cleans up automatically. If there are changes, the work tree path and branch are returned for merging and review. This setup is best for experimenting with approaches that might break the main codebase. Finally, you can control which agents a given agent is allowed to spawn by adding the permitted agent names in the tools section of that agent's config.
This restricts spawning so that multiple agents aren't created unnecessarily, preventing a single agent from going rogue and continuously spinning up too many others. By default, when Claude reads from a file, it only reads 25K tokens. But ever since the context window increased to 1 million tokens, 25K is actually too small and doesn't let Claude utilize its full potential. You can change this in the settings.json by setting this flag to 100K or more. But there's another catch. No matter how large the context window is, Claude only reads 2,000 lines, and it doesn't even know that it has missed the other lines.
So, it never goes back to read the rest. Anthropic doesn't allow you to change this limit. But there's a workaround. You can add an instruction in the CLAUDE.md file so that whenever Claude reads large files, it first checks the line count. If the file exceeds 2,000 lines, it uses offset and limit parameters to read the whole file properly without missing anything in between. We can also configure a hook that is triggered whenever the read command runs. This hook checks the file's line count, and if it exceeds 2,000 lines, it forces the agent to follow the instruction in CLAUDE.md, using commands like head to ensure Claude reads through to the end.
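The read-check hook described above can be sketched in Python. This is a minimal sketch, assuming the documented hook contract: the tool's input arrives as JSON on stdin, and exiting with code 2 feeds the stderr message back to Claude as something it must act on.

```python
#!/usr/bin/env python3
"""PreToolUse hook sketch: warn Claude when a Read target exceeds 2,000 lines."""
import json
import sys

LINE_LIMIT = 2000

def check_large_file(path, limit=LINE_LIMIT):
    """Return (exit_code, message). Exit 2 tells Claude to re-read in chunks."""
    try:
        with open(path, "r", errors="ignore") as f:
            lines = sum(1 for _ in f)
    except OSError:
        return 0, ""  # unreadable or missing file: let the Read tool handle it
    if lines > limit:
        return 2, (
            f"{path} has {lines} lines; the Read tool only returns the first "
            f"{limit}. Re-read it in chunks with the offset and limit parameters."
        )
    return 0, ""

if __name__ == "__main__" and not sys.stdin.isatty():
    try:
        payload = json.load(sys.stdin)
    except (ValueError, OSError):
        payload = {}  # no hook input (e.g. run directly): do nothing
    file_path = payload.get("tool_input", {}).get("file_path", "")
    if file_path:
        code, message = check_large_file(file_path)
        if message:
            print(message, file=sys.stderr)
        if code:
            sys.exit(code)
```

It would be registered in settings.json under `hooks` → `PreToolUse` with a matcher for the Read tool.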
As you already know, Claude Code automatically triggers compaction when the context window reaches 95%. Even with the 1 million token context window, the agent doesn't actually need to wait until the context window is 95% full. The quality of output usually starts degrading when the context window fills up to 70%. This is the right time to trigger autocompacting, unless you need the full 1 million context window. To change this, you just need to add a config flag in settings.json and set the auto-compact percentage override to whichever percent you like. We've set ours at 75%.
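A sketch of that override in settings.json; the key name below is an assumption based on the video's wording ("auto-compact percentage override"), so verify it against the settings reference for your version:

```json
{
  "autoCompactPctOverride": 75
}
```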
Once this is in place, when your context window reaches 75%, it will automatically compact, maintaining the quality of the agent's output. But before we move on to the next features, let's have a word from our sponsor, Make.com. We all know the biggest risk with AI is the black box. You deploy agents, but you can't verify their decisions. Make's new Agents completely change that. Its visual platform combines no-code and AI to deploy agents that run your business. You can build intelligent agents directly inside their visual canvas. Just give your agent a goal, and with over 3,000 native app integrations, it handles the complex decision-making for you.
Beyond agents, the platform is packed with features. You get pre-built templates to start fast, MCP for secure connections, and the knowledge feature to ground responses. The reasoning panel lets you actually see, control, and trust every step the AI takes. Plus, with the Make Grid, your monitoring and insights are in one centralized map. Stop doing manual busy work and create efficient workflows that save time and simplify scaling. Click the link in the pinned comment to grab your exclusive 1 month free pro plan and try make today. Now, most of you might already know this, but Agent Teams is still experimental, which is why many people don't know about it.
In Agent Teams, there's one team leader and multiple team members, each being their own Claude sessions that are started and controlled by the team leader. The team leader is responsible for coordinating the whole task across all these team members. This is actually different from sub-agents, because sub-agents aren't able to communicate with each other, whereas in an agent team, each team member is able to communicate with the others and share information. We've actually created a full video on this where we talk about its features and how to best use it in order to make the most out of its capabilities.
Also, if you are enjoying our content, consider pressing the like button because it helps us create more content like this and reach more people. If you're managing multiple configurations for different types of work, there's an open-source tool called Claude CTX that lets you quickly switch between configured profiles, manage client configurations separately, and handle permissions and tools per profile. To install it, commands are listed for all operating systems: on Mac, you can use the brew install command, and on other systems, you can install it by cloning the repo. The tool manages your settings.json, CLAUDE.md, MCP servers, and backups by keeping track of profiles via a profiles folder inside the main `.claude` folder.
This profiles folder contains a subfolder for each profile with its own settings.json and CLAUDE.md, each optimized for that particular profile. Each settings file contains only the permissions needed for that profile, so nothing bleeds across into another. Switching profiles is straightforward. You can check your current profile using the `-c` flag, and to switch, you run `claude-ctx` followed by the profile name you want. When you switch, it creates a backup of the current working state and saves it to the backup folder, so you always have a record of the previous profile. This way, you can keep multiple profiles completely separate and have Claude work with exactly the permissions it needs without worrying about them merging with each other.
Resources from all our previous videos are available in AI LABS Pro: templates, skills, and a bunch of other stuff you can plug straight into your projects. If you found value in what we do and want to support the channel, this is the best way to do it. The link is in the description. If you get annoyed when Claude co-authors itself on GitHub commits, there's actually a workaround for that as well. In your settings.json, add the attribution key and leave the commit and PR fields empty. After that, whenever you ask Claude to push to GitHub, it won't co-author itself.
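Following the video's description, that settings.json fragment would look like the sketch below; the key names are as described in the video and may vary by version (some versions expose this as `includeCoAuthoredBy: false` instead):

```json
{
  "attribution": {
    "commit": "",
    "pr": ""
  }
}
```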
You can also set it to a custom string so the commit shows whatever author name you choose. By default, Claude Code adds itself as a co-author to every commit, which means it shows up in your repository's contributor graph. Claude Code also sends analytics data to Statsig, where it tracks usage patterns and operational data like latency and reliability. This data is used to A/B test features and drive analytics. It also sends data to Sentry for error logging, allowing Anthropic to diagnose crashes and bugs in production. But if you want to opt out, you can do that by adding three variables to the main settings.json.
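A hedged sketch of those three variables, set through the `env` block of settings.json; the names below are my reading of "telemetry, error reporting, and feedback" (`DISABLE_BUG_COMMAND` disables the /bug feedback command), so confirm them against the settings reference:

```json
{
  "env": {
    "DISABLE_TELEMETRY": "1",
    "DISABLE_ERROR_REPORTING": "1",
    "DISABLE_BUG_COMMAND": "1"
  }
}
```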
These disable telemetry, error reporting, and feedback display. With these in place, Claude Code will no longer send your data out, keeping it private instead. But there is also a separate CLI flag in Claude Code to disable non-essential traffic, which might look like it does the same thing. The difference is that this flag also blocks auto-updates, which you probably don't want. So, it's better to rely on the three settings instead, since they give you the same privacy benefit without cutting off updates. A lot of people also don't know about prompt stashing in Claude Code. If you're typing a prompt and realize you need to send Claude Code a different task first, you can press Ctrl+S to stash your current prompt.
After that, you can type in and send the new one, and your stashed prompt automatically comes back into the input box. A lot of you might already be using hooks, but you can also use exit codes inside your hooks that tell Claude whether the execution should proceed, be blocked, or be ignored. There are three primary types of exit codes. Exit code zero means that the run was successful, and it indicates that the task assigned was done correctly. Most of the time its outputs are not inserted into the context and serve just as an indicator that this was done correctly.
Any exit code other than 0 and 2 is shown in verbose mode and is non-blocking, meaning they are error messages but Claude does not consider them serious enough to stop its workflow. But the most important one is exit code 2, which has a significant impact on our workflows. When we use exit code 2 with any tool, the error message is actually fed back to Claude, and it is forced to act upon that error message. For example, there are often times when you want to use a certain library, but Claude uses another one because of its training patterns.
To prevent this, you can configure a hook for that and have it run before every bash command. It checks whether the command Claude is about to run uses the library you want to avoid; in our case, that was pip. It then prints a message telling Claude not to use pip, directs it to use uv instead, and exits with code 2. With this in place, whenever Claude tries to install through pip, it will be forced to install through uv instead. These hooks with exit codes form the basis of Ralph loops, which you might remember were gaining a lot of traction a little while back.
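A minimal sketch of such a pre-Bash hook in Python, assuming `uv` as the preferred installer (the video's audio garbles the name) and the documented hook contract of JSON on stdin plus a blocking exit code 2:

```python
#!/usr/bin/env python3
"""PreToolUse hook sketch: block `pip install` and steer Claude toward uv."""
import json
import re
import sys

def check_command(command):
    """Return (exit_code, message); exit 2 blocks the command."""
    # Match `pip install` / `pip3 install`, but not `uv pip install`.
    if re.search(r"(?<!uv )\bpip3?\s+install\b", command):
        return 2, (
            "Do not use pip in this project. "
            "Use `uv pip install` (or `uv add`) instead."
        )
    return 0, ""

if __name__ == "__main__" and not sys.stdin.isatty():
    try:
        payload = json.load(sys.stdin)
    except (ValueError, OSError):
        payload = {}  # no hook input (e.g. run directly): do nothing
    cmd = payload.get("tool_input", {}).get("command", "")
    if cmd:
        code, message = check_command(cmd)
        if message:
            print(message, file=sys.stderr)
        if code:
            sys.exit(code)
```

It would be registered in settings.json under `hooks` → `PreToolUse` with a matcher for the Bash tool, so it runs before every bash command Claude issues.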
We also made a video on them in detail which you can check out on our channel. They use the same mechanism of exit codes and hooks to force Claude to keep iterating until the criteria for a complete output has been met. This ensures that Claude doesn't slack off and mark incomplete tasks as complete. These hooks can help in creating multiple similar workflows. That brings us to the end of this video. If you'd like to support the channel and help us keep making videos like this, you can do so by using the super thanks button below.
As always, thank you for watching and I'll see you in the next one.