From version 11.1
edited by David Nestle
on 2019/03/04 19:17
To version 12.1
edited by David Nestle
on 2019/03/04 21:08
Change comment: There is no comment for this version



Page properties
... ... @@ -24,14 +24,6 @@
24 24  
25 25  = Evaluation Processes =
26 26  
27 -=== Setting up an evaluation server ===
28 -
29 -TODO
30 -
31 -=== Adding a new evaluation to an evaluation server ===
32 -
33 -TODO
34 -
35 35  === Adding a new KPI page for an evaluation ===
36 36  
37 37  Usually, when a new evaluation has been developed and shall be used on an evaluation server, a KPI overview page needs to be provided. The following steps and considerations can be used to implement this:
... ... @@ -38,11 +38,56 @@
38 38  
39 39  * One evaluation provider must implement //GaRoSingleEvalProvider.getPageDefinitionsOffered()// and provide the page definition there. This does not necessarily have to be the EvaluationProvider for which the page shall be provided: pages can incorporate results from different EvaluationProviders, and the provider declaring the page in //getPageDefinitionsOffered// can be a completely different EvaluationProvider. In some cases a single project-specific EvaluationProvider declares all KPI pages for the project.
40 40  * When an EvaluationProvider implements //getPageDefinitionsOffered// the respective start page in EvaluationOfflineControl will have a button in the bottom right area "Add KPI-pages offered by provider". When the button is pressed the respective pages are created and updated. See also [[Evaluation Offline Control page>>doc:Tutorial Collection.SDK Tutorial Overview Experimental.Evaluation Offline Control App.WebHome]] regarding this topic.
33 +* When new evaluation results are added to an EvaluationProvider, usually the KPI pages have to be adapted as well - or sometimes an additional page needs to be defined when the number of KPIs would be too large for a single page.
34 +* Configured pages are stored in the ResourceList //offlineEvaluationControlConfig/kpiPageConfigs// (resources of type //KPIPageConfig//). If a page shall be removed the respective entry has to be deleted (e.g. using the ResourceManipulator app) and EvaluationOfflineControl app has to be restarted.
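The pattern of one provider declaring KPI pages that aggregate results from several providers can be sketched as follows. Note that the types below are simplified stand-ins invented for illustration, not the actual framework classes behind //getPageDefinitionsOffered()//:

```java
import java.util.Arrays;
import java.util.List;

public class KpiPageSketch {
    // Simplified stand-in for a KPI page definition; the real framework
    // class carries considerably more configuration.
    public static class PageDefinition {
        public final String pageId;
        // Result providers shown on the page - possibly different from the
        // provider that declares the page.
        public final List<String> resultProviderIds;
        public PageDefinition(String pageId, List<String> resultProviderIds) {
            this.pageId = pageId;
            this.resultProviderIds = resultProviderIds;
        }
    }

    // Simplified stand-in for GaRoSingleEvalProvider.getPageDefinitionsOffered():
    // one provider may declare pages aggregating results from several providers.
    public static List<PageDefinition> getPageDefinitionsOffered() {
        return Arrays.asList(
            new PageDefinition("project-overview",
                Arrays.asList("comfortTempProvider", "outsideTempProvider")));
    }

    public static void main(String[] args) {
        List<PageDefinition> pages = getPageDefinitionsOffered();
        System.out.println(pages.size() + " page(s): " + pages.get(0).pageId);
    }
}
```

All identifiers ("project-overview", the provider IDs) are hypothetical; the point is only that the declaring provider and the providers whose results appear on the page are independent.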
41 41  
42 42  === Adding a new email report / alarm for an evaluation ===
43 43  
44 44  TODO
45 45  
40 +=== Adding evaluation results and other calculations to an existing EvaluationProvider ===
41 +
42 +There are several components typically used to implement EvaluationProviders:
43 +
44 +* The widget timeseries evaluation API fosters the development of evaluation modules that do not store intermediate result time series. For true online evaluation this is usually not possible anyway, as the next step would have to wait until the intermediate time series has been calculated completely. Instead of applying small generic evaluation methods to entire time series, several generic utility classes can be used to perform standard tasks such as the calculation of mean, standard deviation and median/quantiles within an EvaluationProvider online on the incoming data. The collection of such standard modules is provided in [[online/utils>>url:]]
45 +* [[GenericGaRoSingleEvalProvider>>url:]]: Standard abstract class for the implementation of simple GaRo-EvaluationProviders. You have to adapt ID, LABEL and DESCRIPTION, and define getGaRoInputTypes and RESULTS with the respective result definitions. Usually the core logic is implemented in the sub-class EvalCore (constructor and method //processValue//). See Result Levels for more details on how/where to implement the evaluation logic and results.
46 +* [[GenericGaRoSingleEvalProviderPreEval>>url:]]: Standard abstract class for the implementation of GaRo-EvaluationProviders that request the results of other evaluations as input (results are transferred via JSON files).
47 +* When providing a MultiResult class extending [[GaRoMultiResultExtended>>url:]] you can also generate 'overallResults', i.e. results that depend on more than one room or time period. See 'Using GaRoMultiResultExtended' below for more details.
48 +* (((
49 +Using GaRoMultiResultExtended: You can use git\fhg-alliance-internal\src\widgets\timeseries-tools\timeseries-heating-analysis-multi\src\main\java\de\iwes\timeseries\provider\genericcollection\ and ComfortTempRB_OverallMultiResult as examples:
50 +
51 +* If you want to provide new values that are the result of an entire MultiEvaluation (all rooms, gateways and timesteps), you usually define additional members of the class extending GaRoMultiResultExtended, as in OutsideTempGenericMultiResult. You have to make sure you get the right results into JSON (all public members and public methods starting with 'get' are exported).
52 +* If you want to provide per-timestep results, typically a new TimeSeries should be created as a member of the class extending GaRoMultiResultExtended, as in OutsideTempGenericMultiResult.
53 +* If you want to provide per-gateway results, typically an "overall room" is created in RoomData and added to the map of results, so no additional members are required in the class (see ComfortTempRB_OverallMultiResult).
54 +)))
63 +* (((
64 +...
65 +)))
66 +
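The JSON export rule quoted above (all public members and public methods starting with 'get' are exported) can be made concrete with a small reflection sketch. DemoMultiResult is a made-up class, not part of the framework; the sketch only shows which names a field/getter-based serializer would typically pick up:

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.util.Set;
import java.util.TreeSet;

public class ExportedNames {
    // Hypothetical result class: public fields and public get* methods
    // are the export candidates; everything else stays out of the JSON.
    public static class DemoMultiResult {
        public double overallAverage = 21.5;            // exported (public field)
        private double internalBuffer = 0.0;            // not exported (private)
        public double getComfortShare() { return 0.8; } // exported (public getter)
        public void recalculate() {}                    // not exported (no 'get' prefix)
    }

    // Collects the names matching the "public member or public get* method" rule.
    public static Set<String> exportedNames(Class<?> cls) {
        Set<String> names = new TreeSet<>();
        for (Field f : cls.getDeclaredFields())
            if (Modifier.isPublic(f.getModifiers()))
                names.add(f.getName());
        for (Method m : cls.getDeclaredMethods())
            if (Modifier.isPublic(m.getModifiers()) && m.getName().startsWith("get"))
                names.add(m.getName());
        return names;
    }

    public static void main(String[] args) {
        System.out.println(exportedNames(DemoMultiResult.class));
        // prints [getComfortShare, overallAverage]
    }
}
```

If a new member of your MultiResult class does not show up in the JSON output, checking its visibility and getter naming against this rule is the first debugging step.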
67 +General considerations:
68 +
69 +* Add additional results/calculations
70 +* Test with manual evaluation
71 +* Adapt Auto-Evaluation
72 +* Adapt KPI result page(s) and email / alarm report definitions (see above)
73 +
74 +=== Adding a new evaluation to an evaluation server ===
75 +
76 +Usually an existing evaluation is used as a template. Initially a new evaluation provider should be implemented as simply as possible, together with an initial evaluation page. Features can then be added as described above.
77 +
78 +(% class="wikigeneratedid" %)
79 +=== Setting up an evaluation server ===
80 +
81 +Usually a new evaluation server can be set up using an existing evaluation server rundir and configuration as a template.
82 +
46 46  === Evaluation of a gateway's state by collected messages ===
47 47  
48 48  Unexpected errors can occur in the operation of gateways and the server. In day-to-day monitoring, alarm messages allow us to react to errors within a day after a problematic situation arises. Over the long term this produces a large number of messages. By collecting the alarm messages and evaluating them over a given period, you get feedback on which gateways have fundamental problems in their functionality and how fast we reacted to them. Furthermore, this raises the reliability when analyzing the results of important values of the competition. The statistical evaluation does not require any special software skills; it can easily be done in Microsoft Excel.
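If you prefer a scripted evaluation over Excel, the basic aggregation (how many alarm messages each gateway produced in the evaluated period) can be sketched as below. The record format "gatewayId;timestamp;message" is a made-up example, not the actual export format; the aggregation is the same one an Excel pivot table would produce:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class AlarmStats {
    // Counts alarm messages per gateway. Each record is assumed to look like
    // "gatewayId;timestamp;message" - a hypothetical format for illustration.
    public static Map<String, Integer> alarmsPerGateway(List<String> records) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (String record : records) {
            String gateway = record.split(";")[0];
            counts.merge(gateway, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String> records = List.of(
            "gw-01;2019-03-01T08:00;sensor timeout",
            "gw-02;2019-03-01T09:30;no data transfer",
            "gw-01;2019-03-02T07:15;sensor timeout");
        System.out.println(alarmsPerGateway(records));
        // prints {gw-01=2, gw-02=1}
    }
}
```

A gateway that dominates this count over several evaluation periods is a candidate for a fundamental problem rather than a one-off incident.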