=== Improving Analysis
NOTE: Improving an analysis can range from a simple notification of a false positive or the correction of a typographic error, all the way up to a complete competing or counter-analysis of the original analysis.
A common difficulty in threat intelligence is how to improve existing analyses, and especially how to do so efficiently.
One of the main questions to ask is:
**"What will be the target audience of the improved analysis and the objective thereof?**"
The following three answers could come to mind.
. Informing the original analyst/author (e.g. a security vendor or a CSIRT) about a specific mistake or error which needs to be corrected.
. Improving an existing analysis by performing a complementary analysis or review which will be shared with and used by another group (e.g. a specific constituent, a team within your organisation, a member of an ISAC, etc.).
. Providing the improved analysis to an end-consumer that is an automated process rather than a human analyst.
In the **1st** case, <<MISP>> includes a mechanism to propose changes to the original creator, a mechanism MISP refers to as proposals. By using proposals, you can propose a change to the value or the context of an attribute (such as correcting a typographic error in an IP address, adding missing contextual information, changing the type or category of the information, or removing an IDS flag). The proposal is sent back to the original author, who can decide to accept or discard it.
The advantages of the proposal system are that no new event needs to be created and that the process itself is simple and fast. However, it assumes that the party providing the improvement is willing to hand control of the proposed data over to the original author. This works well for small changes, but for more comprehensive ones, especially those that include non-attribute information such as galaxy clusters or objects, the event extension is more appropriate.
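As a rough illustration of the proposal workflow, the following https://github.com/MISP/PyMISP[PyMISP] sketch proposes a correction to an existing attribute. It assumes a recent PyMISP version; the instance URL, authentication key and attribute UUID are placeholders.

[source,python]
----
# Sketch: propose a correction to an existing attribute via PyMISP.
# The MISP URL, API key and attribute UUID below are placeholders.
from pymisp import PyMISP, MISPAttribute

misp = PyMISP('https://misp.example.com', 'YOUR_API_KEY', ssl=True)

# Build the corrected attribute, e.g. fixing a typo in an IP address
# and proposing to drop the IDS flag.
corrected = MISPAttribute()
corrected.type = 'ip-dst'
corrected.value = '203.0.113.42'
corrected.to_ids = False

# Send the proposal against the existing attribute; the original author
# can then accept or discard it on their instance.
misp.update_attribute_proposal('attribute-uuid-goes-here', corrected)
----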
Apart from being more suitable for comprehensive changes, the extended event functionality is also a great fit for the **2nd** scenario: users who want to provide additional information or an alternative viewpoint can create a self-contained event (with its own custom distribution rules) that references the original analysis. This information can be shared back to the original author or kept within a limited distribution scope, such as a specific sector, a trust group, or internal to the organisation providing the additional information.
TIP: For more information about the extended event functionality in MISP, the blog post *http://www.misp-project.org/2018/04/19/Extended-Events-Feature.html[Introducing The New Extended Events Feature in MISP]* includes a lot of details.
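A minimal PyMISP sketch of the extended event workflow is shown below: it creates a self-contained event that extends an existing analysis. The instance URL, key and the original event's UUID are placeholders.

[source,python]
----
# Sketch: create a self-contained event that extends an existing analysis.
# The MISP URL, API key and original event UUID below are placeholders.
from pymisp import PyMISP, MISPEvent

misp = PyMISP('https://misp.example.com', 'YOUR_API_KEY', ssl=True)

extension = MISPEvent()
extension.info = 'Complementary analysis of the original event'
extension.distribution = 0  # e.g. restrict to your own organisation
extension.extends_uuid = 'uuid-of-the-original-event'  # reference the original analysis

# Add the additional findings, then push the new event to the instance.
extension.add_attribute('ip-dst', '198.51.100.7', comment='additional C2 observed')
misp.add_event(extension)
----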
In the **3rd** scenario your use-case might be highly automated, e.g. scripted processing of events and attributes via https://github.com/MISP/PyMISP[PyMISP], where the end-consumer is mainly another automated process, e.g. an Intrusion Detection System, a third-party visualisation tool, etc.
Without a human in the loop, errors introduced anywhere in such a chain propagate unchecked, and the results quickly become unreliable.
What is essential in this case is to fully understand what the <<IDS>> flag in MISP does and how it affects attributes.
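As an illustration, a minimal sketch of such an automated consumer could restrict itself to attributes explicitly flagged for detection. The URL, key and attribute types below are placeholders chosen for the example.

[source,python]
----
# Sketch: an automated consumer (e.g. feeding an IDS) that only pulls
# attributes explicitly marked as actionable (to_ids=True).
# The MISP URL and API key below are placeholders.
from pymisp import PyMISP

misp = PyMISP('https://misp.example.com', 'YOUR_API_KEY', ssl=True)

# Only IDS-flagged network indicators published in the last 7 days;
# attributes without the IDS flag are deliberately excluded.
attributes = misp.search(controller='attributes',
                         to_ids=1,
                         type_attribute=['ip-dst', 'domain', 'hostname'],
                         publish_timestamp='7d',
                         pythonify=True)

for attr in attributes:
    print(attr.type, attr.value)
----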
Beyond that, it is even more important to fully understand the entire tool-chain, cradle-to-grave: where does the data come from (cradle), where does it go (grave), and which processes "touch" the data as it flows through? Small diagrams can help tremendously to visualise the actual data-flow.
Those diagrams will prove most useful when unexpected results occur or other errors appear somewhere in the chain.