OpenTelemetry support in Deno Subhosting — observability at scale

Deno · 00:06:29 · Mar 13, 2026
An introduction to subhosting, a feature of Deno Deploy for running apps and untrusted code in a scalable hosted environment.

OpenTelemetry support in Deno Deploy subhosting enables automatic instrumentation and OTLP firehose export to observability platforms like Honeycomb for end-to-end tracing.

Summary

Deno showcases how subhosting within Deno Deploy can run untrusted user code—ranging from small JavaScript snippets to full Next.js apps—securely and at scale. The host explains that you can integrate with existing SaaS workflows, letting users write custom functions or deploy entire apps. A core pain point addressed is observability across potentially hundreds of thousands of deployed apps. Deno Deploy ships with auto-instrumentation for common APIs (console.log, outbound fetches, incoming HTTP requests) and automatically creates traces that tie logs to the relevant trace IDs. Users can also add custom OpenTelemetry instrumentation to generate bespoke spans, traces, metrics, and logs without special APIs. For exporting, the dashboard supports an OTLP firehose endpoint to forward data to your observability platform, enabling a single, continuous trace across SaaS infrastructure and Deno Deploy. The presenter demonstrates configuring a Honeycomb endpoint and authentication, then verifies that both logs and traces appear in Honeycomb alongside the Deno Deploy dashboard data. In short, you get end-to-end observability for subhosting deployments, with easy integration into existing observability tooling.

Key Takeaways

  • Deno Deploy offers built-in OpenTelemetry support and automatic instrumentation for APIs such as console.log, outbound fetches, and incoming HTTP requests, generating traces and associating logs with trace IDs.
  • You can create custom spans, traces, metrics, and logs using standard OpenTelemetry APIs, without needing special Deno Deploy-specific calls.
  • Data from Deno Deploy can be exported via an OTLP firehose to your existing observability platform, preserving end-to-end context across SaaS and subhosting components.
  • Setting up export is straightforward: configure the OTLP endpoint (e.g., api.honeycomb.io) and authenticate using the x-honeycomb-team header; data then flows automatically.
  • The firehose export is validated by showing both logs and traces in Honeycomb that reflect the same incoming/outgoing request sequence and latency details seen in the Deno Deploy dashboard.
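The end-to-end continuity described above depends on W3C Trace Context propagation: auto-instrumentation can join whatever trace ID the caller sends in the `traceparent` request header. As a rough illustration (this is not Deno Deploy's internal code, just the header format from the W3C spec), building and parsing that header looks like:

```typescript
// Build a W3C traceparent header value from a trace ID and parent span ID.
// Format: version "00", 32-hex trace ID, 16-hex span ID, 2-hex flags.
function buildTraceparent(traceId: string, spanId: string, sampled = true): string {
  return `00-${traceId}-${spanId}-${sampled ? "01" : "00"}`;
}

// Parse a traceparent header back into its trace and span IDs.
function parseTraceparent(header: string): { traceId: string; spanId: string } | null {
  const m = /^00-([0-9a-f]{32})-([0-9a-f]{16})-[0-9a-f]{2}$/.exec(header);
  return m ? { traceId: m[1], spanId: m[2] } : null;
}

// Example IDs taken from the W3C Trace Context specification.
const header = buildTraceparent("4bf92f3577b34da6a3ce929d0e0e4736", "00f067aa0ba902b7");
const parsed = parseTraceparent(header);
```

If your SaaS backend forwards this header when calling a subhosted app, the spans generated on both sides share one trace ID and render as a single trace in Honeycomb.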

Who Is This For?

Essential viewing for developers using Deno Deploy subhosting who want real-world guidance on instrumenting and exporting telemetry to external observability platforms like Honeycomb.

Notable Quotes

"Subhosting is a feature of Deno Deploy that lets you programmatically create applications on our infrastructure to run untrusted user code."
Intro describes the purpose of subhosting and its use cases.
"You can deploy full applications. Maybe you are a vibe-coding platform and you're generating full Next.js applications using an LLM."
Example of the breadth of deployable workloads.
"We support you specifying a URL that we can send the entire OpenTelemetry firehose of all of this data to."
Explains OTLP fire hose export setup.
"The traces contain the same information that I just saw in the Deno Deploy dashboard earlier."
Demonstrates parity of data between dashboard and external observability tool.

Questions This Video Answers

  • How does OpenTelemetry auto-instrumentation work in Deno Deploy subhosting?
  • What steps are needed to export Deno Deploy telemetry to Honeycomb using OTLP?
  • Can I mix auto-instrumented data with custom OpenTelemetry instrumentation in Deno Deploy?
  • How do I visualize end-to-end traces that span SaaS infrastructure and Deno Deploy?
Tags: Deno Deploy, OpenTelemetry, Subhosting, OTLP Firehose, Observability, Honeycomb, Tracing, Logging, Auto-instrumentation
Full Transcript
Subhosting is a feature of Deno Deploy that lets you programmatically create applications on our infrastructure to run untrusted user code, generated by LLMs or written by your users as part of your application. When would you use this? For example, maybe your users can write some hooks inside of your existing SaaS application. Maybe you have a billing application and your users want to be able to write custom billing functions, or they want to interact with their database: cases where you could traditionally have used webhooks. You can have the user write JavaScript functions right inside your application. And is it limited to functions? No, absolutely not. You can deploy full applications. Maybe you are a vibe-coding platform and you're generating full Next.js applications using an LLM. You can deploy those directly using Deno subhosting and we will execute them for you securely, globally deployed, with automatic scaling and all that kind of stuff. So it's like a hosting platform for platforms. That's right. What about understanding what is happening inside of these applications? That's one of the biggest challenges with subhosting. You might have hundreds of thousands of applications deployed at any given time for a bunch of different users, and you want to really understand what's going on in them. You want their logs, so you can show errors to users once they've written some code; maybe they have a syntax error, and you want to expose that to them. You want traces: maybe you have an existing SaaS application that is generating traces, and you want those traces to continue into your code for these functions or pages. And you really want metrics too, because if something goes wrong and the error rate increases, somebody needs to get paged to fix the problem.
So how does Deno Deploy help you manage this? Deno Deploy has two things which really help with this. One is built-in OpenTelemetry support, and the second is exporting that OpenTelemetry data to your own observability platforms using an OTLP firehose. Let's start with the first one. Deno has existing instrumentation for a bunch of APIs that we expose, like console.log, outbound fetches, and incoming HTTP requests. We generate spans for incoming HTTP requests, associated with the trace IDs that are in the incoming request, or we generate a new trace ID. We can attach console.log output to those traces, and various actions also create child spans of that trace, for example outbound fetch requests, which we will automatically trace. I can actually demo this in this little application that I have here, which is doing a couple of console.logs and an outbound fetch. Looking at the logs page, I can see that Deno has automatically captured all of the logs, but even more interestingly, it has automatically captured a bunch of traces: here's the incoming request, then the outgoing request to example.com, and the logs are all associated with that trace. So you can really have very good observability of every single request using this automatic instrumentation. Okay, so Deno Deploy has auto-instrumented JavaScript, and you can view all sorts of data within the Deno Deploy dashboard, but how do you exfiltrate that data?
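The demo app described here can be sketched as a plain fetch handler; nothing in the code mentions tracing, because Deno Deploy instruments the incoming request, the outbound fetch, and the console.log calls automatically. The handler below is hypothetical, not the exact app from the video, and `fetchFn` is injectable only so the sketch can run without network access:

```typescript
// Minimal demo-style handler: on Deno Deploy, the incoming request becomes a
// span (joining any trace ID the caller sent), the outbound fetch becomes a
// child span, and both console.log lines are attached to the same trace.
async function handler(
  req: Request,
  fetchFn: typeof fetch = fetch, // injectable for testing; a real app calls fetch
): Promise<Response> {
  console.log(`incoming: ${req.method} ${new URL(req.url).pathname}`);
  const upstream = await fetchFn("https://example.com/"); // auto-traced child span
  console.log(`outbound status: ${upstream.status}`);     // tied to the same trace
  return new Response(`upstream said ${upstream.status}`);
}

// On Deno Deploy you would register it with the runtime's HTTP server:
// Deno.serve(handler);
```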
Well, even before we go into exfiltrating data: you can also set up custom OpenTelemetry data inside of Deno Deploy. You can import @opentelemetry/api, for example, and create custom spans, custom traces, custom metrics, and custom logs, just the same way as you normally would with OpenTelemetry. You don't need to use any special APIs. Exactly. So the question now is how do you get that data into your existing observability platform? Because you almost certainly have one if you are already a SaaS. Exactly. And what you want is for requests that travel from your SaaS infrastructure into Deno Deploy, and maybe back into your SaaS infrastructure (maybe the user is making an API call to you), to show up as one continuous trace, where you can see the entire chain of events without interruptions. Inside of Deno Deploy, the subhosting platform is just a small part of your overall business. Right. Exactly. So Deno Deploy has this firehose support. We support specifying a URL to which we can send the entire OpenTelemetry firehose of all the data we've collected, both the auto-instrumented and the manually instrumented data, and then your system can ingest it and display it together with any existing data you have for that request. So you can build full traces including both the Deno Deploy part of the trace and everything around it from applications you already have. Should I set it up? Let's check it out. So I've set up a little Honeycomb environment here; I just signed up to Honeycomb, and the first thing it prompts you to do is send data to it. There's no data in here right now. How do we do this? Well, we head over to the Deno Deploy dashboard and go to the settings page for the organization. I can scroll down to the OpenTelemetry endpoint and configure it here. In the Honeycomb docs, there is an endpoint URL that I can use: api.honeycomb.io.
I can enter that into the Deno Deploy dashboard, and then I just have to set up my authentication using the x-honeycomb-team header. Let me specify that here, paste in the value, and save. And that's it; the firehose is now active. Any new telemetry generated by Deno Deploy from this point onwards inside of the subhosting organization will be automatically sent to Honeycomb. So any of your apps receiving requests generates telemetry data, which is then piped out through this firehose into Honeycomb. Yeah, exactly. So let's try that out. I can just go visit that same URL from before, the same application that was generating logs, refresh it a couple of times, and head back over to Honeycomb. If I explore this data, I can obviously see the logs here, which is great. So that's working. But I can also see the traces, and the traces contain the same information that I just saw in the Deno Deploy dashboard earlier, with the incoming and outgoing requests. So the incoming HTTP request, the outgoing HTTP request, and all the same fields that are available inside of Deno Deploy are now in my own observability platform: incoming request URLs, outgoing request URLs, latency information, all that kind of stuff is now piped in through this firehose. Awesome.
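For reference, the two settings configured in the dashboard (endpoint URL and auth header) correspond to the standard OTLP exporter environment variables. A hedged sketch, assuming you were running the Deno runtime yourself with its built-in OpenTelemetry support rather than through Subhosting (the key value is a placeholder):

```shell
# Standard OTLP exporter settings; OTEL_DENO enables Deno's built-in OTel.
# YOUR_INGEST_KEY is a placeholder for a real Honeycomb ingest key.
OTEL_DENO=true \
OTEL_EXPORTER_OTLP_ENDPOINT="https://api.honeycomb.io" \
OTEL_EXPORTER_OTLP_HEADERS="x-honeycomb-team=YOUR_INGEST_KEY" \
deno run --allow-net main.ts
```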
