diff --git a/fi1/README.md b/fi1/README.md
index a700a73..a4b88af 100644
--- a/fi1/README.md
+++ b/fi1/README.md
@@ -1,6 +1,6 @@
 # This exercise is to model a pensions system.
 
-## original text from the client (named ACME)
+## Original text from the client (named ACME)
 
 ACME operates a Master Trust Pension. An ACME pension consists of units of Assets held on behalf of the member. These units, and any associated cash, are recorded in an internal ledger. The actual units of each asset are aggregated into a single holding held by an external Asset Custodian on behalf of ACME.
 Asset units are bought and sold on behalf of members. The process of buying and selling assets takes several days: trades are only placed periodically and take several days to settle.
@@ -43,8 +43,37 @@
 Consider the description above in an event-driven platform with information stor
 
 Assuming the platform is implemented in Go, what patterns would you use to enable large numbers (c200-300) of processes to be produced in a consistent and repeatable manner by multiple teams?
 
-## interpretation of the text
+## Interpretation of the exercise text and assumptions made
 
-## assumptions
+Some notes on the interpretation of the text and questions to ask.
+**Main points to address are marked in bold.**
+
+* There are a lot of different business processes (200-300 as per the text)
+  * What is a process?
+  * How unique are those processes? Do they share any common parts?
+  * The original text describes a "process for buying assets"
+    * Is it a typical process?
+    * Is it comprised of smaller processes? If so, that could explain the 200-300 processes.
+  * They need to be managed by different teams across the company
+    * It would be great to allow each team to work independently and yet share any improvements.
+      **How to allow for code and pattern reuse while maintaining team autonomy** (one possible shared-template pattern is sketched under "proposals" below)
+  * Types of processes
+    * The text mentions that some processes are automated. **I assume this means that any process can have both automated and manual components.**
+  * The text mentions some processes needing to handle large amounts of data
+    * Let's try to estimate the volumes involved
+    * Is keeping historical data (e.g. past trades) in hot storage (quickly accessible) important?
+    * **I assume historical data older than N months can be moved to cold storage.**
+* "Data must be 100% accurate"
+  * What does "accuracy" mean in this statement?
+    * Does it mean consistency and durability?
+  * **Losing data is assumed to be very bad**
+    * Once something has entered the system from a third party, e.g. an order to buy or sell, it needs to be persisted durably. **We can only reply to the initial call from the third party once we can guarantee durability.** (a minimal ingest sketch is included under "proposals" below)
+  * **I assume different parts of the system can be at different phases of processing data, as long as third parties see a consistent state of transactions.**
+    * Is the granularity here per-customer?
+  * I think the text mentions that all buy and sell transactions are batched up to be executed together.
+    * Is it important for these batches to be externally observable?
+    * **Can a customer or a regulator demand proof that a transaction was processed at roughly the same time as others, in the same batch?**
+* Not a real-time system
+  * **I assume we care more about total system throughput than about the latency of individual processing steps.**
 ## proposals
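+
+One way to read the "consistent and repeatable across teams" requirement is a shared process template: teams implement small steps against a common interface, and a shared runner provides the plumbing. The sketch below is only an illustration of that idea; the `Step`, `Definition` and `Run` names are my own, not from the brief, and a real runner would also need retries, idempotency and durable checkpoints.
+
+```go
+package process
+
+import "context"
+
+// Step is one unit of work in a business process. It consumes an input
+// event and may emit follow-up events. Teams implement Steps; the shared
+// runner below provides the common plumbing.
+type Step interface {
+	// Name identifies the step for logging and metrics.
+	Name() string
+	Handle(ctx context.Context, event []byte) (outputs [][]byte, err error)
+}
+
+// Definition wires named Steps into a process. If every team declares
+// processes this way, c200-300 of them stay consistent and repeatable.
+type Definition struct {
+	Name  string
+	Steps []Step
+}
+
+// Run executes the steps in order, feeding each step's outputs into the
+// next. A production runner would add retries, idempotency keys, metrics
+// and durable checkpoints; this sketch only shows the shared shape.
+func Run(ctx context.Context, def Definition, event []byte) ([][]byte, error) {
+	inputs := [][]byte{event}
+	for _, s := range def.Steps {
+		var next [][]byte
+		for _, in := range inputs {
+			out, err := s.Handle(ctx, in)
+			if err != nil {
+				return nil, err
+			}
+			next = append(next, out...)
+		}
+		inputs = next
+	}
+	return inputs, nil
+}
+```
+
+The intent is that only the `Handle` bodies differ between teams; consuming events, persisting state and publishing outputs would live in the shared runner.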
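+
+A minimal sketch of the "reply only after durability" assumption above. The `Order` fields, the HTTP endpoint and the in-memory store are illustrative stand-ins; the real durability guarantee would come from whatever log or database the platform uses. The only point being made is the ordering: decode, persist, and only then acknowledge the third party.
+
+```go
+package main
+
+import (
+	"encoding/json"
+	"log"
+	"net/http"
+	"sync"
+)
+
+// Order is an illustrative shape for an incoming buy/sell instruction.
+type Order struct {
+	MemberID string `json:"member_id"`
+	Asset    string `json:"asset"`
+	Units    int64  `json:"units"`
+	Side     string `json:"side"` // "buy" or "sell"
+}
+
+// DurableStore stands in for whatever actually gives the durability
+// guarantee (a write-ahead log, a replicated database, ...).
+type DurableStore interface {
+	Append(o Order) error
+}
+
+// memStore is NOT durable; it exists only so the sketch runs.
+type memStore struct {
+	mu     sync.Mutex
+	orders []Order
+}
+
+func (m *memStore) Append(o Order) error {
+	m.mu.Lock()
+	defer m.mu.Unlock()
+	m.orders = append(m.orders, o)
+	return nil
+}
+
+// ingestHandler acknowledges the caller only once the order has been
+// persisted, matching the "reply only after durability" assumption above.
+func ingestHandler(store DurableStore) http.HandlerFunc {
+	return func(w http.ResponseWriter, r *http.Request) {
+		var o Order
+		if err := json.NewDecoder(r.Body).Decode(&o); err != nil {
+			http.Error(w, "bad request", http.StatusBadRequest)
+			return
+		}
+		// Persist first; if this fails we must NOT acknowledge.
+		if err := store.Append(o); err != nil {
+			log.Printf("persist failed: %v", err)
+			http.Error(w, "try again later", http.StatusServiceUnavailable)
+			return
+		}
+		// Only now is it safe to reply to the third party.
+		w.WriteHeader(http.StatusAccepted)
+	}
+}
+
+func main() {
+	http.Handle("/orders", ingestHandler(&memStore{}))
+	log.Fatal(http.ListenAndServe(":8080", nil))
+}
+```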