Doc improvement v0.3.0 (#173)

Thibault "bui" Koechlin 2020-08-05 11:24:34 +02:00 committed by GitHub
parent 911d2d6d5c
commit fbebee01d3
21 changed files with 176 additions and 213 deletions

View file

@ -1,13 +1,13 @@
---
name: Feature request
about: Suggest an idea for this project
title: Improvment/
title: Improvement/
labels: enhancement
assignees: ''
---
Please, start your issue name (after `improvment`) with the component name impacted by this feature request and a small description of the FR. Example: `Improvment/cscli: add this feature ....` and remove this line :)
Please, start your issue name (after `improvement`) with the component name impacted by this feature request and a small description of the FR. Example: `Improvement/cscli: add this feature ....` and remove this line :)
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

View file

@ -2,10 +2,10 @@ categories:
- title: 'New Features'
labels:
- 'new feature'
- title: 'Improvments'
- title: 'Improvements'
labels:
- 'enhancement'
- 'improvment'
- 'improvement'
- title: 'Bug Fixes'
labels:
- 'fix'
@ -13,7 +13,7 @@ categories:
- 'bug'
- title: 'Documentation'
labels:
- 'documention'
- 'documentation'
- 'doc'
tag-template: "- $TITLE @$AUTHOR (#$NUMBER)"
template: |

Binary file not shown (image, 160 KiB).

View file

@ -38,9 +38,11 @@ And 64 records from API, 32 distinct AS, 19 distinct countries
- `EXPIRATION` is the time left on remediation
## Remove a ban
Check [command usage](/cscli/cscli_ban_list/) for additional filtering and output control flags.
## Delete a ban
> delete the ban on IP `1.2.3.4`
```bash
@ -69,4 +71,17 @@ And 64 records from API, 32 distinct AS, 19 distinct countries
```
## Flush all existing bans
> Flush all the existing bans
```bash
{{cli.bin}} ban flush
```
!!! warning
This will also remove any existing ban
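To remove a single ban instead of flushing everything, a sketch assuming the v0.x `cscli ban del` syntax :
```bash
{{cli.bin}} ban del ip 1.2.3.4
```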

View file

@ -1,4 +1,4 @@
{{cli.bin}} allows you install, list, update/upgrade and remove configurations : parsers, enrichment, scenarios.
{{cli.bin}} allows you to install, list, upgrade and remove configurations : parsers, enrichers, scenarios.
The various parsers, enrichers and scenarios installed on your machine make a coherent ensemble that provides detection capabilities.

View file

@ -13,30 +13,28 @@
{{crowdsec.Name}} is under [MIT license]({{crowdsec.url}}/blob/master/LICENSE)
## How fast is it ?
{{crowdsec.name}} can easily handle 5k+ EP/s on a rich pipeline (multiple parsers, geoip enrichment, scenarios and so on). Logs are a good fit for sharding by default, so it is definitely the way to go if you need to handle higher throughput.
If you need help for large scale deployment, please get in touch with us on the {{doc.discourse}}, we love challenges ;)
## Is there any performance impact ?
As {{crowdsec.name}} only works on logs, it shouldn't impact your production.
When it comes to {{blockers.name}}, it should perform **one** request to the database when a **new** IP is discovered thus have minimal performance impact.
## Which information is shared from my logs ?
## Which information is sent to the APIs ?
Our aim is to build a strong community that can share malevolent attackers' IPs ; to do so, we need to collect the bans triggered locally by each user.
The signal sent by your {{crowdsec.name}} to the central API only contains meta-data about the attack, including :
The signal sent by your {{crowdsec.name}} to the central API contains only meta-data about the attack :
- Attacker IP
- Scenario name
- Time of start/end of attack
You can find the specific list [here]({{crowdsec.url}}/blob/master/pkg/types/signal_occurence.go)
Your logs are not sent to our central API ; only meta-data about blocked attacks is.
## What is the performance impact ?
As {{crowdsec.name}} only works on logs, it shouldn't impact your production.
When it comes to {{blockers.name}}, it should perform **one** request to the database when a **new** IP is discovered, and thus have minimal performance impact.
## How fast is it ?
{{crowdsec.name}} can easily handle several thousand events per second on a rich pipeline (multiple parsers, geoip enrichment, scenarios and so on). Logs are a good fit for sharding by default, so it is definitely the way to go if you need to handle higher throughput.
If you need help for large scale deployment, please get in touch with us on the {{doc.discourse}}, we love challenges ;)
## What backend database does {{crowdsec.Name}} support, and how do I switch ?
@ -46,6 +44,13 @@ See [backend configuration](/references/output/#switching-backend-database) for
SQLite is the default backend as it's suitable for standalone/single-machine setups.
On the other hand, MySQL is more suitable for distributed architectures where blockers across the applicative stack need to access a centralized ban database.
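For illustration, the backend plugin configuration might look like the following sketch ; the key names here are assumptions and should be checked against the [backend configuration](/references/output/#switching-backend-database) reference :
```yaml
# illustrative backend plugin configuration (key names are assumptions)
name: database
path: /usr/local/lib/crowdsec/plugins/backend/database.so
config:
  type: sqlite                                 # or "mysql" for distributed setups
  db_path: /var/lib/crowdsec/data/crowdsec.db  # sqlite only
  flush: true
```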
## How to control granularity of actions ? (whitelists, learning etc.)
{{crowdsec.name}} supports both [whitelists](/write_configurations/whitelist/) and [learning](/guide/crowdsec/simulation/) :
- Whitelists allow you to "discard" events or overflows
- Learning allows you to simply cancel the decision that is going to be taken, while keeping track of it
## How to add whitelists ?
You can follow this [guide](/write_configurations/whitelist/)
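As a minimal sketch (the name and addresses are illustrative), a whitelist parser looks like :
```yaml
name: me/my-whitelists
description: "Whitelist our own IPs"
whitelist:
  reason: "internal IPs"
  ip:
    - "192.168.1.1"
  cidr:
    - "10.0.0.0/8"
```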
@ -68,13 +73,19 @@ Defaults env_keep += "HTTP_PROXY HTTPS_PROXY NO_PROXY"
To report a bug, please open an issue on the [repository]({{crowdsec.bugreport}})
## What about false positives ?
Several initiatives have been taken to tackle false positives as early as possible :
- The scenarios published on the hub are tailored to favor low false positive rates
- You can find [generic whitelists](https://hub.crowdsec.net/author/crowdsecurity/collections/whitelist-good-actors) that should cover most common cases (SEO whitelists, CDN whitelists etc.)
- The [simulation configuration](/guide/crowdsec/simulation/) allows you to keep tight control over scenarios and their false positives
## I need some help
Feel free to ask the {{doc.community}} for help.
## Who's stronger : elephant or hippopotamus ?
[The answer](https://www.quora.com/Which-animal-is-stronger-the-elephant-or-the-hippopotamus)
<!--

View file

@ -21,7 +21,7 @@ You can as well [write your own](/write_configurations/parsers/) !
Enrichment is the action of adding extra context to an event based on the information we already have, so that a better decision can be taken later. In most cases, you should be able to find the relevant enrichers on our {{hub.htmlname}}.
The most common/simple type of enrichment would be geoip-enrichment of an event (adding information such as : origin country, origin AS and origin IP range to an event).
A common/simple type of enrichment would be geoip-enrichment of an event (adding information such as : origin country, origin AS and origin IP range to an event).
Once again, you should be able to find the ones you're looking for on the {{hub.htmlname}} !

View file

@ -1,19 +1,4 @@
## Finding configurations
{{crowdsec.Name}} efficiency is dictated by installed parsers and scenarios, so [take a look at the {{hub.name}}]({{hub.url}}) to find the appropriated ones !
If you didn't perform the setup with the wizard, or if you are reading logs from other machines, you will have to pick the right {{collections.htmlname}}. This will ensure that {{crowdsec.name}} can parse the logs and has the corresponding scenarios.
For example, if you're processing [nginx](http://nginx.org) logs, you might want to install the [nginx collection](https://hub.crowdsec.net/author/crowdsecurity/collections/nginx).
A collection can be installed by typing `cscli install collection crowdsecurity/nginx`, and provides all the necessary parsers and scenarios to handle said log source. `systemctl reload crowdsec` to ensure the new scenarios are loaded.
In the same spirit, the [crowdsecurity/sshd](https://hub.crowdsec.net/author/crowdsecurity/collections/sshd)'s collection will fit most sshd setups !
While {{crowdsec.name}} is running, a quick look at [`cscli metrics`](/observability/command_line/) should help you ensure that your log sources are correctly parsed.
## List installed configurations
> List installed parsers/scenarios/collections/enricher
@ -74,6 +59,23 @@ INFO[0000] POSTOVERFLOWS:
</details>
## Finding configurations
{{crowdsec.Name}} efficiency is dictated by installed parsers and scenarios, so [take a look at the {{hub.name}}]({{hub.url}}) to find the appropriate ones !
If you didn't perform the setup with the wizard, or if you are reading logs from other machines, you will have to pick the right {{collections.htmlname}}. This will ensure that {{crowdsec.name}} can parse the logs and has the corresponding scenarios.
For example, if you're processing [nginx](http://nginx.org) logs, you might want to install the [nginx collection](https://hub.crowdsec.net/author/crowdsecurity/collections/nginx).
A collection can be installed by typing `cscli install collection crowdsecurity/nginx`, and provides all the necessary parsers and scenarios to handle said log source. Run `systemctl reload crowdsec` to ensure the new scenarios are loaded.
In the same spirit, the [crowdsecurity/sshd](https://hub.crowdsec.net/author/crowdsecurity/collections/sshd) collection will fit most sshd setups !
While {{crowdsec.name}} is running, a quick look at [`cscli metrics`](/observability/command_line/) should help you ensure that your log sources are correctly parsed.
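Putting the above together, a typical flow looks like this (using the nginx collection as an example) :
```bash
cscli install collection crowdsecurity/nginx   # install parsers & scenarios for nginx
systemctl reload crowdsec                      # load the new configurations
cscli metrics                                  # check that logs are parsed as expected
```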
## List existing bans
> List current bans

View file

@ -3,7 +3,7 @@ Enrichers are basically {{parsers.htmlname}} that can rely on external methods t
Enricher functions should all accept a string as a parameter, and return an associative string array that will be automatically merged into the `Enriched` map of the {{event.htmlname}}.
!!! warning
At the time of writing, enrichers plugin mechanism implementation is still ongoing (read: the list of available is currently hardcoded).
At the time of writing, enrichers plugin mechanism implementation is still ongoing (read: the list of available enrichment methods is currently hardcoded).
As an example, let's look at the geoip-enrich parser/enricher :

View file

@ -50,7 +50,7 @@ INFO[0000] Loaded 9 collecs, 14 parsers, 12 scenarios, 1 post-overflow parsers
INFO[0000] crowdsec/nginx-logs : OK
INFO[0000] Enabled parsers : crowdsec/nginx-logs
INFO[0000] Enabled crowdsec/nginx-logs
# systemctl restart crowdsec
# systemctl reload crowdsec
```
### Your own parsers

View file

@ -50,7 +50,7 @@ INFO[0000] Loaded 9 collecs, 14 parsers, 12 scenarios, 1 post-overflow parsers
INFO[0000] crowdsec/ssh-bf : OK
INFO[0000] Enabled scenarios : crowdsec/ssh-bf
INFO[0000] Enabled crowdsec/ssh-bf
# systemctl restart crowdsec
# systemctl reload crowdsec
```
### Your own scenarios

View file

@ -1,14 +1,17 @@
`{{cli.bin}}` is the utility that will help you manage {{crowdsec.name}}. This tool has the following functionalities:
- [manage bans]({{ cli.ban_doc }}) : list, add, remove ...
- [backup and restore]({{ cli.backup_doc }}) configuration
- [manage bans]({{ cli.ban_doc }})
- [backup and restore configuration]({{ cli.backup_doc }})
- [display metrics]({{ cli.metrics_doc }})
- [install]({{ cli.install_doc }}) parsers/scenarios/collections
- [remove]({{ cli.remove_doc }}) parsers/scenarios/collections
- [update]({{ cli.update_doc }}) the hub cache
- [upgrade]({{ cli.upgrade_doc }}) parsers/scenarios/collections
- [list]({{ cli.list_doc }}) parsers/scenarios/collections
- [install configurations]({{ cli.install_doc }})
- [remove configurations]({{ cli.remove_doc }})
- [update configurations]({{ cli.update_doc }})
- [upgrade configurations]({{ cli.upgrade_doc }})
- [list configurations]({{ cli.list_doc }})
- [interact with CrowdSec API]({{ cli.api_doc }})
- [manage simulation]({{cli.simulation_doc}})
Take a look at the [dedicated documentation]({{cli.main_doc}})
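As a quick illustration of day-to-day usage (a sketch ; check each subcommand's documentation for exact flags) :
```bash
{{cli.bin}} list                                    # list installed configurations
{{cli.bin}} update                                  # update the hub cache
{{cli.bin}} install collection crowdsecurity/nginx  # install a collection
{{cli.bin}} upgrade                                 # upgrade installed configurations
{{cli.bin}} metrics                                 # display metrics
```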
## Overview

View file

@ -35,18 +35,31 @@ Thanks to this, besides detecting and stopping attacks in real time based on you
All of those are represented as YAML files, that can be found, shared and kept up-to-date thanks to the {{hub.htmlname}}, or [easily hand-crafted](/write_configurations/scenarios/) to address specific needs.
## Main features
{{crowdsec.Name}}, besides the core "detect and react" mechanism, is committed to a few other key points :
- **Easy Installation** : The provided wizard allows a [trivial deployment](/getting_started/installation/#using-the-interactive-wizard) on most standard setups
- **Easy daily operations** : Using [cscli](/cscli/cscli_upgrade/) and the {{hub.htmlname}}, keeping your detection mechanisms up-to-date is trivial
- **Observability** : Providing strong insights on what is going on and what {{crowdsec.name}} is doing :
- Humans have [access to a trivially deployable web interface](/observability/dashboard/)
- OPs have [access to detailed prometheus metrics](/observability/prometheus/)
- Admins have [a friendly command-line interface tool](/observability/command_line/)
## Moving forward
To learn more about {{crowdsec.name}} and give it a try, please see :
- [How to install {{crowdsec.name}}](/getting_started/installation/)
- [Take a quick tour of {{crowdsec.name}} and {{cli.name}} features](/getting_started/crowdsec-tour/)
- [Deploy {{blockers.name}} to stop malevolent peers](/blockers/)
- [Observability of {{crowdsec.name}}](/observability/overview/)
- [Understand {{crowdsec.name}} configuration](/getting_started/concepts/)
- [Deploy {{blockers.name}} to stop malevolent peers](/blockers/)
- [FAQ](getting_started/FAQ/)
If you have a functional {{crowdsec.name}} setup, you might want to find the right [{{blockers.name}}](/blockers/).
Don't hesitate to look at the [glossary](/getting_started/glossary/) for clarification !
Don't hesitate to reach out if you're facing issues :
- [report a bug](https://github.com/crowdsecurity/crowdsec/issues/new?assignees=&labels=bug&template=bug_report.md&title=Bug%2F)
- [suggest an improvement](https://github.com/crowdsecurity/crowdsec/issues/new?assignees=&labels=enhancement&template=feature_request.md&title=Improvement%2F)
- [ask for help on the forums](https://discourse.crowdsec.net)

View file

@ -50,6 +50,10 @@ Now you can connect to your dashboard, sign-in with your saved credentials then
![Dashboard_view](../assets/images/dashboard_view.png)
![Dashboard_view2](../assets/images/dashboard_view2.png)
The dashboard docker image can be managed by {{cli.name}} as well as by the docker cli. Look at the {{cli.name}} help command using
```bash
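{{cli.bin}} dashboard -h   # illustrative ; the `dashboard` subcommand name is an assumption
```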

View file

@ -3,20 +3,20 @@
Scenarios are YAML files that allow you to detect and qualify a specific behavior, usually an attack.
Scenarios receive one or more {{event.htmlname}} and might produce one or more {{overflow.htmlname}}.
As an {{event.htmlname}} can be the representation of a log line, or an overflow, it allows scenarios to process both logs or overflows.
Scenarios receive {{event.htmlname}}(s) and can produce {{overflow.htmlname}}(s) using the [leaky bucket](https://en.wikipedia.org/wiki/Leaky_bucket) algorithm.
As an {{event.htmlname}} can be the representation of a log line or an overflow, scenarios can process both logs and overflows, allowing inference.
The scenario is usually based on a number of factors, at least :
Scenarios can be of different types (leaky, trigger, counter), and are based on various factors, such as :
- the speed/frequency at which events happen (see [leaky bucket](https://en.wikipedia.org/wiki/Leaky_bucket))
- the characteristic(s) of an {{event.htmlname}} : "log type XX with field YY set to ZZ"
- the speed/frequency of the [leaky bucket](https://en.wikipedia.org/wiki/Leaky_bucket)
- the capacity of the [leaky bucket](https://en.wikipedia.org/wiki/Leaky_bucket)
- the characteristic(s) of eligible {{event.htmlname}}(s) : "log type XX with field YY set to ZZ"
- various filters/directives that can alter the bucket's behavior, such as [groupby](/references/scenarios/#groupby), [distinct](/references/scenarios/#distinct) or [blackhole](/references/scenarios/#blackhole)
Behind the scenes, {{crowdsec.name}} is going to create one or more buckets when events with matching characteristics arrive to the scenario. When any of these buckets overflows, the scenario has been triggered.
Behind the scenes, {{crowdsec.name}} is going to create one or several buckets when events with matching characteristics arrive to the scenario. Each bucket has a capacity and a leak-speed ; when a bucket "overflows", the scenario has been triggered.
_Bucket partitioning_ : One scenario usually leads to many bucket creation, as each bucket is only tracking a specific subset of events. For example, if we are tracking brute-force, it makes sense that each "offending peer" get its own bucket.
_Bucket partitioning_ : One scenario usually leads to the creation of many buckets, as each bucket only tracks a specific subset of events. For example, if we are tracking brute-force, each "offending peer" gets its own bucket.
A way to detect an HTTP scanner might be to track the number of distinct non-existing pages it requests, and the scenario might look like this :
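A sketch of such a scenario, mirroring the `http-scan-uniques_404` example that appears (commented out) later in this document :
```yaml
# 404 scan
type: leaky
name: crowdsecurity/http-scan-uniques_404
description: "Detect multiple unique 404 from a single ip"
filter: "evt.Meta.service == 'http' && evt.Meta.http_status in ['404', '403', '400']"
groupby: "evt.Meta.source_ip"
distinct: "evt.Meta.http_path"
capacity: 5
reprocess: true
leakspeed: "10s"
blackhole: 5m
labels:
  service: http
  type: scan
  remediation: true
```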

View file

@ -1,15 +1,15 @@
# Write the acquisition file (optional for test)
In order for your log to be processed by the right parser, it must match the filter that you will configure in your parser file.
There is two option:
There are two options:
- Your logs are wrote from a syslog server, so you just have to install the [syslog parser](https://master.d3padiiorjhf1k.amplifyapp.com/author/crowdsecurity/configurations/syslog-logs)
- You're log are read from a log file. Please add this kind of configuration in your `acquis.yaml` file:
- Your logs are written by a syslog server, so you just have to install the [syslog parser](https://hub.crowdsec.net/author/crowdsecurity/configurations/syslog-logs)
- Your logs are read from a log file. Please add this kind of configuration in your `acquis.yaml` file:
&#9432; the `type` is the one that the parser in `s01-parse` filter will need to match.
&#9432; the `type` will be matched by the parser's `filter` in stage `s01-parse`.
```
```yaml
---
filename: <PATH_TO_YOUR_LOG_FILE>
labels:
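  type: <LOG_TYPE>  # e.g. nginx ; this is what the parser's filter in s01-parse matches on
```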

View file

@ -1,15 +1,25 @@
# Expressions
> {{expr.htmlname}} : Expression evaluation engine for Go: fast, non-Turing complete, dynamic typing, static typing
> [antonmedv/expr](https://github.com/antonmedv/expr) - Expression evaluation engine for Go: fast, non-Turing complete, dynamic typing, static typing
Several places of {{crowdsec.name}}'s configuration use {{expr.htmlname}} :
Several places of {{crowdsec.name}}'s configuration use [expr](https://github.com/antonmedv/expr), notably :
- {{filter.Htmlname}} that are used to determine event eligibility in {{parsers.htmlname}} and {{scenarios.htmlname}}, or in `profiles`
- {{statics.Htmlname}} use expr in the `expression` directive, to compute complex values
- {{whitelists.Htmlname}} rely on `expression` directive to allow more complex whitelists filters
To learn more about {{expr.htmlname}}, [check the github page of the project](https://github.com/antonmedv/expr/blob/master/docs/Language-Definition.md).
To learn more about [expr](https://github.com/antonmedv/expr), [check the github page of the project](https://github.com/antonmedv/expr/blob/master/docs/Language-Definition.md).
When {{crowdsec.name}} relies on `expr`, a context is provided to let the expression access relevant objects :
- `evt.` is the representation of the current {{event.htmlname}} and is the most relevant object
- in [profiles](/references/output/#profile), {{signal.htmlname}} is accessible via the `sig.` object
If `debug` is enabled (in the scenario or parser where expr is used), additional information about the evaluated expressions will be displayed.
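For instance, a filter as it might appear in a parser or scenario (a sketch ; the field values are illustrative) :
```yaml
# keep only ssh authentication failures that don't come from localhost
filter: "evt.Meta.log_type == 'ssh_failed-auth' && evt.Meta.source_ip != '127.0.0.1'"
```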
# Helpers
To make its use in {{crowdsec.name}} more efficient, we added a few helpers, documented below.

View file

@ -62,30 +62,33 @@ INFO[0001] Enabled crowdsecurity/linux
## &#9432; Reminder
Logs parsing is divided into stage. Each stage are called in the format "sXX-<stage_name>". If a log success in a stage and is configured to go in `next_stage`, then the next stage will process it. Stage process is sorted alphabetically.
Log parsing is divided into stages, and each stage can contain one or more parsers. Stages are named using a "sXX-<stage_name>" convention and are processed in alphabetical order. When a log is successfully parsed by a node that is configured to go to `next_stage`, the event is forwarded to the next stage (and the remaining parsers of the current stage are skipped).
Stages and parsers are processed alphabetically, so the expected order would be :
```
s00-raw/syslog.yaml
s01-parse/nginx.yaml
s01-parse/apache.yaml
s01-parse/nginx.yaml
s02-enrich/geoip.yaml
s02-enrich/rdns.yaml
```
### Basics stage
### Default stages
- The first stage (`s00-parse`) is mostly the one that will parsed the begining of your log to say : "This log come from this `source`" where the source can be whatever software that produce logs.
If all your logs are sent to a syslog server, there is a [parser](https://master.d3padiiorjhf1k.amplifyapp.com/author/crowdsecurity/configurations/syslog-logs) that will parse the syslog header to detect the program source.
When the log is processed, the results (ie. capture groups) will be merged in the current {{event.htmlname}} before being sent to the next stage.
- The preliminary stage (`s00-raw`) is mostly the one that will parse the structure of the log. This is where [syslog-logs](https://hub.crowdsec.net/author/crowdsecurity/configurations/syslog-logs) are parsed for example. Such a parser will parse the syslog header to detect the program source.
- The second (`s01-parse`) is the one that will parse the logs and output parsed data and static assigned values. There is currently one parser for one type of software. To parse the logs, regexp or GROK pattern are used. If the parser is configured to go to the [`next_stage`](/references/parsers/#onsuccess), then it will be process by the `enrichment` stage.
- The main stage (`s01-parse`) is the one that will parse actual application logs and output parsed data and statically assigned values. There is one parser for each type of software. To parse the logs, regexps or GROK patterns are used. If the parser is configured to go to the [`next_stage`](/references/parsers/#onsuccess), the event will then be processed by the `enrichment` stage.
- The enrichment (`s02-enrich`) stage is the one that will enrich the normalized log (we call it an event now that it is normalized) in order to get more information for the heuristic process. This stage can be composed of grok patterns and so on, but also of plugins written by the community (geoip enrichment, rdns, ...), for example [geoip-enrich](https://hub.crowdsec.net/author/crowdsecurity/configurations/geoip-enrich).
You can now jump to the next step : [writing your own parser !](/write_configurations/parsers/)
- The enrichment `s02-enrich` stage is the one that will enrich the normalized log (we call it an event now that it is normalized) in order to get more information for the heuristic process. This stage can be composed of grok patterns and so on, but as well of plugins that can be writen by the community (geiop enrichment, rdns ...)
### Custom stage
Of course, it is possible to write custom stages. If you want some specific parsing or enrichment to be done after the `s02-enrich` stage, it is possible by creating a new folder `s03-<custom_stage>`. The configuration that will be created in this folder will process the logs configurated to go to `next_stage` in the `s02-enrich` stage. Be careful to write filter that will match incoming event in your custom stage.
It is possible to write custom stages. If you want some specific parsing or enrichment to be done after the `s02-enrich` stage, you can create a new folder `s03-<custom_stage>` (and so on). The configurations created in this folder will process the events configured to go to `next_stage` in the `s02-enrich` stage.
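A minimal sketch of what a parser in such a custom stage could look like (the name, filter and static are illustrative) :
```yaml
# would live in, e.g., s03-custom/my-tagging.yaml
filter: "evt.Meta.service == 'http'"
onsuccess: next_stage
name: me/my-tagging
description: "illustrative custom-stage parser adding a static meta field"
statics:
  - meta: custom_tag
    value: checked
```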

View file

@ -3,7 +3,7 @@
!!! info
Please ensure that you have a working environment, or set up a test environment, before writing your scenario.
Ensure that your logs are properly parsed.
Ensure that [your logs are properly parsed](/write_configurations/parsers/).
Have some sample logs within reach to test your scenario as you progress.
@ -12,18 +12,20 @@
> This document aims at detailing the process of writing and testing new scenarios.
> If you're writing a scenario for existing logs, [take a look at the taxonomy](https://hub.crowdsec.net/fields) to find your way !
## Base scenario file
The simple scenario can be defined as :
A rudimentary scenario can be defined as :
```yaml
type: leaky
debug: true
name: me/my-cool-scenario
description: "detect cool stuff"
filter: "1 == 1"
filter: evt.Meta.log_type == 'iptables_drop'
capacity: 1
leakspeed: 1m
blackhole: 1m
@ -31,7 +33,7 @@ labels:
type: my_test
```
- a {{filter.htmlname}} : if the expression is `true`, the event will enter the parser, otherwise, it won't
- a {{filter.htmlname}} : if the expression is `true`, the event will enter the scenario, otherwise, it won't
- a name & a description
- a capacity for our [Leaky Bucket](https://en.wikipedia.org/wiki/Leaky_bucket)
- a leak speed for our [Leaky Bucket](https://en.wikipedia.org/wiki/Leaky_bucket)
@ -63,22 +65,26 @@ May 12 09:40:16 sd-126005 kernel: [47678084.929208] IN=enp1s0 OUT= MAC=00:08:a2:
<details>
<summary>Expected output</summary>
```bash
DEBU[04-08-2020 10:44:26] eval(evt.Meta.log_type == 'iptables_drop') = TRUE cfg=shy-dust file=config/scenarios/iptables-scan.yaml name=me/my-cool-scenario
DEBU[04-08-2020 10:44:26] eval variables: cfg=shy-dust file=config/scenarios/iptables-scan.yaml name=me/my-cool-scenario
DEBU[04-08-2020 10:44:26] evt.Meta.log_type = 'iptables_drop' cfg=shy-dust file=config/scenarios/iptables-scan.yaml name=me/my-cool-scenario
...
DEBU[2020-05-12T11:22:17+02:00] eval(TRUE) 1 == 1 cfg=snowy-dawn file=config/scenarios/mytest.yaml name=me/my-cool-scenario
DEBU[2020-05-12T11:22:17+02:00] Instanciating TimeMachine bucket cfg=snowy-dawn file=config/scenarios/mytest.yaml name=me/my-cool-scenario
DEBU[2020-05-12T11:22:17+02:00] Leaky routine starting, lifetime : 2m0s bucket_id=withered-brook capacity=1 cfg=snowy-dawn file=config/scenarios/mytest.yaml name=me/my-cool-scenario partition=f16d5033bebe7090fb626f5feb4e4073cee206d4
DEBU[2020-05-12T11:22:17+02:00] Pouring event bucket_id=withered-brook capacity=1 cfg=snowy-dawn file=config/scenarios/mytest.yaml name=me/my-cool-scenario partition=f16d5033bebe7090fb626f5feb4e4073cee206d4
DEBU[2020-05-12T11:22:17+02:00] First pour, creation timestamp : 2020-05-12 09:40:15 +0000 UTC bucket_id=withered-brook capacity=1 cfg=snowy-dawn file=config/scenarios/mytest.yaml name=me/my-cool-scenario partition=f16d5033bebe7090fb626f5feb4e4073cee206d4
DEBU[2020-05-12T11:22:17+02:00] eval(TRUE) 1 == 1 cfg=snowy-dawn file=config/scenarios/mytest.yaml name=me/my-cool-scenario
DEBU[2020-05-12T11:22:17+02:00] Pouring event bucket_id=withered-brook capacity=1 cfg=snowy-dawn file=config/scenarios/mytest.yaml name=me/my-cool-scenario partition=f16d5033bebe7090fb626f5feb4e4073cee206d4
DEBU[2020-05-12T11:22:17+02:00] Bucket overflow at 2020-05-12 09:40:15 +0000 UTC bucket_id=withered-brook capacity=1 cfg=snowy-dawn file=config/scenarios/mytest.yaml name=me/my-cool-scenario partition=f16d5033bebe7090fb626f5feb4e4073cee206d4
DEBU[2020-05-12T11:22:17+02:00] Overflow, bucket start: 2020-05-12 09:40:15 +0000 UTC, bucket end : 2020-05-12 09:40:15 +0000 UTC bucket_id=withered-brook capacity=1 cfg=snowy-dawn file=config/scenarios/mytest.yaml name=me/my-cool-scenario partition=f16d5033bebe7090fb626f5feb4e4073cee206d4
DEBU[2020-05-12T11:22:17+02:00] Adding blackhole (until: 2020-05-12 09:41:15 +0000 UTC) bucket_id=withered-brook capacity=1 cfg=snowy-dawn file=config/scenarios/mytest.yaml name=me/my-cool-scenario partition=f16d5033bebe7090fb626f5feb4e4073cee206d4
DEBU[2020-05-12T11:22:17+02:00] Leaky routine exit bucket_id=withered-brook capacity=1 cfg=snowy-dawn file=config/scenarios/mytest.yaml name=me/my-cool-scenario partition=f16d5033bebe7090fb626f5feb4e4073cee206d4
DEBU[2020-05-12T11:22:17+02:00] eval(TRUE) 1 == 1 cfg=snowy-dawn file=config/scenarios/mytest.yaml name=me/my-cool-scenario
INFO[12-05-2020 11:22:17] node warning : no remediation bucket_id=withered-brook event_time="2020-05-12 09:40:15 +0000 UTC" scenario=me/my-cool-scenario source_ip=66.66.66.66
DEBU[2020-05-12T11:22:17+02:00] Bucket f16d5033bebe7090fb626f5feb4e4073cee206d4 dead/expired, cleanup bucket_id=withered-brook capacity=1 cfg=snowy-dawn file=config/scenarios/mytest.yaml name=me/my-cool-scenario partition=f16d5033bebe7090fb626f5feb4e4073cee206d4
INFO[12-05-2020 11:22:17] Processing Overflow with no decisions 2 IPs performed 'me/my-cool-scenario' (2 events over 0s) at 2020-05-12 09:40:15 +0000 UTC bucket_id=withered-brook event_time="2020-05-12 09:40:15 +0000 UTC" scenario=me/my-cool-scenario source_ip=66.66.66.66
DEBU[04-08-2020 10:44:26] eval(evt.Meta.log_type == 'iptables_drop') = TRUE cfg=shy-dust file=config/scenarios/iptables-scan.yaml name=me/my-cool-scenario
DEBU[04-08-2020 10:44:26] eval variables: cfg=shy-dust file=config/scenarios/iptables-scan.yaml name=me/my-cool-scenario
DEBU[04-08-2020 10:44:26] evt.Meta.log_type = 'iptables_drop' cfg=shy-dust file=config/scenarios/iptables-scan.yaml name=me/my-cool-scenario
...
DEBU[04-08-2020 10:44:26] Overflow (start: 2020-05-12 09:40:15 +0000 UTC, end: 2020-05-12 09:40:15 +0000 UTC) bucket_id=sparkling-thunder capacity=1 cfg=shy-dust file=config/scenarios/iptables-scan.yaml name=me/my-cool-scenario partition=ea2fed6bf8bb70d462ef8acacc4c96f5f8754413
DEBU[04-08-2020 10:44:26] Adding overflow to blackhole (2020-05-12 09:40:15 +0000 UTC) bucket_id=sparkling-thunder capacity=1 cfg=shy-dust file=config/scenarios/iptables-scan.yaml name=me/my-cool-scenario partition=ea2fed6bf8bb70d462ef8acacc4c96f5f8754413
DEBU[04-08-2020 10:44:26] eval(evt.Meta.log_type == 'iptables_drop') = TRUE cfg=shy-dust file=config/scenarios/iptables-scan.yaml name=me/my-cool-scenario
DEBU[04-08-2020 10:44:26] eval variables: cfg=shy-dust file=config/scenarios/iptables-scan.yaml name=me/my-cool-scenario
DEBU[04-08-2020 10:44:26] evt.Meta.log_type = 'iptables_drop' cfg=shy-dust file=config/scenarios/iptables-scan.yaml name=me/my-cool-scenario
DEBU[04-08-2020 10:44:26] Bucket ea2fed6bf8bb70d462ef8acacc4c96f5f8754413 found dead, cleanup the body bucket_id=sparkling-thunder capacity=1 cfg=shy-dust file=config/scenarios/iptables-scan.yaml name=me/my-cool-scenario partition=ea2fed6bf8bb70d462ef8acacc4c96f5f8754413
WARN[04-08-2020 10:44:26] read 4 lines file=./x.log
...
INFO[04-08-2020 10:44:26] Processing Overflow with no decisions 2 IPs performed 'me/my-cool-scenario' (2 events over 0s) at 2020-05-12 09:40:15 +0000 UTC bucket_id=sparkling-thunder event_time="2020-05-12 09:40:15 +0000 UTC" scenario=me/my-cool-scenario source_ip=66.66.66.66
...
DEBU[04-08-2020 10:44:26] Overflow discarded, still blackholed for 59s bucket_id=long-pine capacity=1 cfg=shy-dust file=config/scenarios/iptables-scan.yaml name=me/my-cool-scenario partition=ea2fed6bf8bb70d462ef8acacc4c96f5f8754413
DEBU[04-08-2020 10:44:26] Overflow has been discard (*leakybucket.Blackhole) bucket_id=long-pine capacity=1 cfg=shy-dust file=config/scenarios/iptables-scan.yaml name=me/my-cool-scenario partition=ea2fed6bf8bb70d462ef8acacc4c96f5f8754413
...
```
</details>
@ -88,7 +94,7 @@ We can see our "mock" scenario is working, let's see what happened :
- The first event (parsed line) is processed :
- The `filter` returned true (`1 == 1`) so the {{event.htmlname}} will be processed by our bucket
- The `filter` returned true (`evt.Meta.log_type == 'iptables_drop'`) so the {{event.htmlname}} will be processed by our bucket
- The bucket is instantiated in {{timeMachine.htmlname}} mode, and its creation date is set to the timestamp from the first log
- The {{event.htmlname}} is poured in the actual bucket
@ -268,7 +274,7 @@ It seems to work correctly !
## Hold my beer and watch this
One I have acquire confidence in my scenario and I want it to trigger some bans, we can simply add :
Once I have acquired confidence in my scenario and want it to trigger some bans, we can simply add :
```yaml
@ -295,7 +301,7 @@ Adding `remediation: true` into the labels tells {{crowdsec.name}} that we shoul
Let's try :
- I copied the yaml file to a production system (`/etc/crowdsec/crowdsec/scenarios/mytest.yaml`)
- I restart {{crowdsec.name}} (`systemctl restart crowdsec`)
- I restart {{crowdsec.name}} (`systemctl reload crowdsec`)
Let's check if it seems correctly enabled :
@ -338,108 +344,3 @@ INFO[0000] backend plugin 'database' loaded
```
It worked !!!
<!--
# Writing Crowdsec scenario
> Please refer to []() if the parser doesn't exist
## Acquiring the logs
First step to test a scenario is to get the logs that trigger your wanted scenario.
## Write the yaml configuration file
The first configuration for a scenario is the [type]().
It can uniq, trigger or leaky. Please [see here]() for more description about scenarios type.
```yaml
type: leaky
```
Then the `name`. Please write it in the form of `<github_account_name>/<scenario_name>` .
```yaml
name: crowdsecurity/http-scan-uniques_404
```
Describe in one sentence what this scenario trigger :
```yaml
description: Detect multiple unique 404 from a single ip
```
Now come the filter, the one that will decide if the incoming log will be store in our bucket.
For the HTTP 404 scans, we want only logs from web request that return a `404`, `403` or `400` HTTP code:
```yaml
filter: "evt.Meta.service == 'http' && evt.Meta.http_status in ['404', '403', '400']"
```
Then we want to groupby this scenario for `source_ip` (ip of the attacker) :
```yaml
groupby: evt.Meta.source_ip
```
And we want only one log to be store for each HTTP URI query by the attacker (no duplicate):
```yaml
distinct: evt.Meta.http_path
```
We can say that if someone query 5 or more unknown or forbidden HTTP URI with a leakspeed of 10s, then it is probably a web scan:
```yaml
capacity: 5
leakspeed: "10s"
```
Then we want to blackhole it for 5minutes (because if the attacker crawl a lot, we don't want to be notofied for every 5 times HTTP 404/403/400):
```yaml
blackhole: 5m
```
We also want to reprocess this scenarios if it happen for more heuristic:
```yaml
reprocess: true
```
And then we give some labels to this scenario for remediation:
```yaml
service: http
type: scan
remediation: true
```
Full example scenario:
<details>
<summary>Nginx </summary>
```yaml
# 404 scan
type: leaky
#debug: true
name: crowdsecurity/http-scan-uniques_404
description: "Detect multiple unique 404 from a single ip"
filter: "evt.Meta.service == 'http' && evt.Meta.http_status in ['404', '403', '400']"
groupby: "evt.Meta.source_ip"
distinct: "evt.Meta.http_path"
capacity: 5
#debug: true
reprocess: true
leakspeed: "10s"
blackhole: 5m
labels:
service: http
type: scan
remediation: true
```
</details> -->

View file

@ -86,7 +86,7 @@ whitelist:
- "80.x.x.x"
```
and reload {{crowdsec.name}} : `sudo systemctl restart crowdsec`
and reload {{crowdsec.name}} : `sudo systemctl reload crowdsec`
### Test the whitelist

View file

@ -134,6 +134,7 @@ extra:
update_doc: /cscli/cscli_update/
upgrade_doc: /cscli/cscli_upgrade/
backup_doc: /cscli/cscli_backup/
simulation_doc: /cscli/cscli_simulation/
config:
cli_dir: /etc/crowdsec/cscli/
crowdsec_dir: "/etc/crowdsec/config/"
@ -174,8 +175,8 @@ extra:
expr:
name: expr
Name: Expr
htmlname: "[expr](https://github.com/antonmedv/expr)"
Htmlname: "[Expr](https://github.com/antonmedv/expr)"
htmlname: "[expr](/write_configurations/expressions/)"
Htmlname: "[Expr](/write_configurations/expressions/)"
stages:
name: stages
Name: Stages