This Datadog Fundamentals study guide will help you prepare for the Datadog Fundamentals exam with all the required resources. Datadog Fundamentals is the Datadog certification that tests the core knowledge needed to use the Datadog platform effectively.
In this article, I will go through all the resources that can help you prepare for the Datadog Fundamentals exam.
Note: The author of this blog is Datadog certified.
Table of Contents
- What is the Datadog Fundamentals exam?
- Datadog Fundamentals Exam Preparation Guide
- Computer Fundamentals
- Infrastructure Development
- Networking & Agent Configuration
- Data Collection
- Troubleshooting Datadog
- Data Visualization & Utilization
- Datadog Fundamentals Certification Resources
- Introduction to Observability
What is the Datadog Fundamentals exam?
The official Datadog certification page says:
Datadog Fundamentals is our foundational certification offering. This exam tests core knowledge required to use the platform effectively. Knowledge covered includes basic computer fundamentals, infrastructure deployment with Datadog, networking and Datadog Agent configuration, data collection, troubleshooting the Datadog Agent, and data visualization and utilization.
Datadog Fundamentals is the entry-level certification for Datadog newcomers. It is aimed at engineers who are new to the platform and want to validate their core Datadog skills.
Datadog Fundamentals Exam Preparation Guide
This section will go over the complete resources and official Datadog documentation pages that can help you prepare for the exam better.
Content Outline:
Computer Fundamentals
- Config File Modification
Some programs require you to configure them by editing a text file before the software will run as you wish.
These text files configure the software and are, unsurprisingly enough, called "config files".
Config files are essentially editable text files that contain information required for the successful operation of a program. The files are structured in a particular way, formatted to be user configurable.
Some config files are structured in a format of the developer's own design. Others use well-known data-serialization standards such as JSON (JavaScript Object Notation), YAML (YAML Ain't Markup Language), and XML (eXtensible Markup Language).
Some programs load the information stored in their config files only when they start, while others periodically check the config file to see if it has been changed.
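To make this concrete, here is a minimal Python sketch of a program loading a hypothetical JSON config file at startup. The file name and keys are invented for illustration; this is not a real Datadog config.

```python
import json
import tempfile

# A hypothetical JSON config for an imaginary "webserver" program.
config_text = """
{
    "port": 8080,
    "log_level": "info",
    "workers": 4
}
"""

# Write the config to disk, then load it back the way a program would at startup.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    f.write(config_text)
    path = f.name

with open(path) as f:
    config = json.load(f)

# The structured text values are now plain Python types.
print(config["port"])
print(config["log_level"])
```

A program that re-reads this file periodically would simply repeat the `json.load` step and compare the result to its in-memory settings.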
- Operating Systems
- Programming Languages
- Hardware Concepts
- Shell
- Metadata
- Networking
Notes:
- Which backend languages are supported by Datadog?
- Know what swap memory is and what it is used for.
- Know what a page fault is and where in the computer it happens.
- Networking covers subnet masks; there were two questions on it.
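Since subnet masks come up in the exam, here is a short Python sketch using the standard ipaddress module. The example network is arbitrary.

```python
import ipaddress

# An arbitrary example network: 192.168.1.0 with a /24 subnet mask.
net = ipaddress.ip_network("192.168.1.0/24")

print(net.netmask)        # the /24 prefix corresponds to mask 255.255.255.0
print(net.num_addresses)  # a /24 contains 256 addresses
print(ipaddress.ip_address("192.168.1.42") in net)  # membership check
```

The key relationship to remember is that the prefix length (/24) and the dotted mask (255.255.255.0) are two notations for the same thing.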
Infrastructure Development
- Agent Installation
The Datadog Agent is software that runs on your hosts. It collects events and metrics from hosts and sends them to Datadog.
Installing the full Agent is recommended, so that all of its functionality is available.
- API Key
- Application Key
Application keys, in conjunction with your organization’s API key, give users access to Datadog’s programmatic API. [*]
Application keys are associated with the user account that created them and by default have the permission and scopes of the user who created them. [*]
If a user’s account is disabled, any application keys that the user created are revoked. Any API keys that were created by the disabled account are not deleted, and are still valid. [*]
The Datadog API is an HTTP REST API. It uses resource-oriented URLs, returns JSON from all requests, and uses standard HTTP response codes to indicate the success or failure of requests. [*]
Authenticate to the API with an API key using the header DD-API-KEY. For some endpoints, you also need an Application key, which uses the header DD-APPLICATION-KEY. [*]
Use the Datadog HTTP API to access the Datadog platform programmatically. You can use the API to send data to Datadog, build data visualizations, and manage your account.
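As a sketch, the snippet below builds the two authentication headers and a metric-submission payload for the v1 series endpoint, without actually sending a request. The key values and the metric name are placeholders, and the endpoint shape should be verified against the current API reference.

```python
import json

# Your keys would normally come from environment variables or a secrets
# manager; these placeholder strings are for illustration only.
API_KEY = "<your-api-key>"
APP_KEY = "<your-application-key>"

# Authentication headers used by the Datadog HTTP API.
headers = {
    "DD-API-KEY": API_KEY,          # required for all endpoints
    "DD-APPLICATION-KEY": APP_KEY,  # additionally required for some endpoints
    "Content-Type": "application/json",
}

# A hypothetical payload for POST https://api.datadoghq.com/api/v1/series,
# which sends metric data to Datadog.
payload = {
    "series": [
        {
            "metric": "example.requests.count",  # invented metric name
            "points": [[1700000000, 42]],        # [timestamp, value] pairs
            "type": "count",
            "tags": ["env:dev"],
        }
    ]
}

body = json.dumps(payload)
print(headers["DD-API-KEY"])
```

Sending the request would then be a single POST with these headers and this body, using any HTTP client.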
- Running the Agent
On Ubuntu, it is recommended to use the service manager:
sudo service datadog-agent status
Meanwhile, systemctl is used on Red Hat systems:
sudo systemctl status datadog-agent
- Agent Hostname
The Agent can send its own configuration to Datadog to be displayed in the Agent Configuration section of the host detail panel. [*]
The Agent configuration is scrubbed of any sensitive information and only contains configuration you've set using the configuration file or environment variables. The configuration changes are updated every 10 minutes. [*]
Notes:
- What is the Datadog recommendation when installing the Agent on an Ubuntu / Linux system? The one-line install command?
- The formal definition of an API key: is it an "alphanumeric string" or a "unique identifier"?
- What is the difference between an API key and an Application key? What is an Application key used for?
- What happens to a user's API keys and Application keys when that user is removed from the organization?
- Canonical hostname
Networking & Agent Configuration
- Datadog Ports
Open the following ports to benefit from all the Agent functionalities:
Outbound:
Functionality | Port / Protocol |
Agent, APM, Containers, Live Processes, Metrics | 443 / TCP |
Custom Agent Autoscaling | 8443 / TCP |
Log Collection | 10516 / TCP |
NTP | 123 / UDP |
Inbound:
Functionality | Port / Protocol |
Agent Browser GUI | 5002 / TCP |
APM receiver | 8126 / TCP |
DogStatsD | 8125 / UDP |
go_expvar server (APM) | 5012 / TCP |
go_expvar integration server | 5000 / TCP |
IPC API | 5001 / TCP |
Process Agent debug | 6062 / TCP |
Process Agent runtime | 6162 / TCP |
- Datadog IP Addresses
- Auto-discovery
Notes:
- A port to check metrics or something, port 5001? And know what each port is used for.
- Something like https://ip-addresses.datadoghq.com came up in the exam. What is that?
- It seems the URL https://ip-ranges.datadoghq.com/ is used to get information about Datadog IP ranges. [*]
- The topic is under API Reference, so I may need to be a little familiar with some of the APIs used in Datadog. [*]
- How do you specify Autodiscovery? For example, to discover a specific container such as nginx, how do you define it in the configuration file?
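As a sketch of file-based Autodiscovery for an nginx container, a check template in the Agent's conf.d directory might look like the following. The status URL follows the standard nginx check, but treat the details as an assumption to verify against the current Datadog docs.

```yaml
# conf.d/nginx.d/conf.yaml -- Autodiscovery template for nginx containers
ad_identifiers:
  - nginx            # match containers whose short image name is "nginx"

init_config:

instances:
  - nginx_status_url: "http://%%host%%:%%port%%/nginx_status/"
```

The %%host%% and %%port%% template variables are filled in by the Agent for each discovered container, so one template covers every matching container.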
Data Collection
- DogStatsD
- Crawlers
- Agent Integrations
- API Endpoints
- Tagging Best Practices
- Metrics & Timeseries
Notes:
- Which features does DogStatsD take from StatsD?
- What are crawlers? They come up often: the difference between crawler-based and Agent-based integrations, and whether their function is to push metrics or to pull metrics.
- You should be able to explain all of the above concepts clearly just from hearing the terms.
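To illustrate what DogStatsD inherits from StatsD, the sketch below hand-builds a StatsD-style counter datagram, appends Datadog's tag extension, and fires it at the default DogStatsD port over UDP. The metric name and tag are invented, and since UDP is fire-and-forget, the send succeeds even with no Agent running locally.

```python
import socket

def dogstatsd_count(metric, value, tags=None):
    """Build a DogStatsD counter datagram: 'metric:value|c|#tag1,tag2'."""
    payload = f"{metric}:{value}|c"          # plain StatsD counter format
    if tags:
        payload += "|#" + ",".join(tags)     # tags are Datadog's extension to StatsD
    return payload

payload = dogstatsd_count("page.views", 1, tags=["env:dev"])
print(payload)  # page.views:1|c|#env:dev

# DogStatsD listens on UDP port 8125 by default.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(payload.encode("utf-8"), ("127.0.0.1", 8125))
sock.close()
```

In practice you would use an official DogStatsD client library rather than raw sockets, but the wire format above is the part DogStatsD shares with (and extends from) StatsD.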
Troubleshooting Datadog
- Agent Commands
- Agent Logs
- Agent Config Files
Notes:
- Some commands show up in the answer options: flare, diagnose, check, configcheck.
- What are all the available commands? I should be familiar with some of them, e.g. datadog-agent status, datadog-agent flare, datadog-agent check, datadog-agent configcheck, datadog-agent diagnose.
- How to inspect the Agent logs.
- And the directory structure, which I should be familiar with. It comes up often: conf.d and checks.d (whether checks.d exists or not, I was not sure back then).
Data Visualization & Utilization
- Host Map
- Dashboards
- Using Metrics
- Using Tags
- Monitors and Alerts
Notes:
- Is the Host Map used to highlight or to visualize?
- Basic terms for Dashboards and their functionality, such as event reporting.
- Is the no_proxy setting in the Datadog config used for metrics?
- All kinds of monitors and alerts need hands-on practice.
Datadog Fundamentals Certification Resources
Introduction to Observability
What is Monitoring?
Monitoring is the process of gathering data to understand what’s going on inside of your infrastructure.
- Monitoring is the act of paying attention to the patterns that your metrics reveal. It's about analyzing your data and acting on it.
What do we monitor?
- Performance
By watching performance we can watch how our architecture and applications are using the resources that are available
- Security
Is something going wrong in our environment? Creating monitors around security metrics can stop incidents in their tracks.
- Usage
What are our users doing in our environment? Are they interacting with our products?
Alerting
An alert is simply a threshold set on a monitor. When that threshold is breached, a notification is sent to the designated recipient.
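As a sketch, a threshold-based metric monitor created through the Datadog API might be defined with a payload like the one below. The query, threshold, and notification handle are invented for illustration.

```json
{
  "name": "High CPU on web hosts",
  "type": "metric alert",
  "query": "avg(last_5m):avg:system.cpu.user{role:web} > 90",
  "message": "CPU usage is above 90% for 5 minutes. Notify @ops-team",
  "options": {
    "thresholds": { "critical": 90 }
  }
}
```

The query reads as: average system.cpu.user over the last 5 minutes, across hosts tagged role:web; the alert fires when that value crosses 90.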
What is Observability?
Observability is taking the same data that you’ve collected and moving beyond “What is happening?” to “Why is it happening?”
Three Pillars of Observability
Metrics
These data points are numerical values that can track anything about your environment over time, from latency to error rates to user signups
Example of Metrics:
[17.82, 22:11:01]
[ 6.38, 22:11:12]
[ 2.87, 22:11:38]
[ 7.06, 22:12:00]
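The value/timestamp pairs above form a timeseries. As a sketch, aggregating such points (here a simple average, using the sample values from the text) looks like this:

```python
# Each metric point is a (value, "HH:MM:SS" timestamp) pair, as in the example above.
points = [
    (17.82, "22:11:01"),
    (6.38,  "22:11:12"),
    (2.87,  "22:11:38"),
    (7.06,  "22:12:00"),
]

# Average the values over the interval covered by the points.
values = [v for v, _ in points]
average = sum(values) / len(values)
print(round(average, 2))  # 8.53
```

Rollups like this (avg, max, sum over a time window) are exactly what a monitoring backend computes before graphing or alerting on a metric.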
Visualizing Metrics:
Why do we collect metrics?
- Baseline for Operations
Metrics can tell us what normal looks like for our applications. Without metrics, we’re stuck guessing what’s going on.
- Reactive Responses
Using metrics we don’t have to wait until a customer reports an outage. We can react to issues in our environment before they snowball
- Proactive Responses
Why wait for something to go wrong? By looking at metrics we can get ahead of problems before they happen.
Logs
A computer-generated file that contains time-stamped information about the usage of that system.
Traces
Used to track the time spent by an application processing a request and the status of this request
What is Datadog?
Datadog is an observability platform for cloud-scale applications, providing monitoring of servers, databases, tools, and services, through a SaaS-based data analytics platform