
Posts

Azure Data Lake and Replication Mechanisms across Azure Regions

Recent posts

[Post-Event] Event Sourcing and CQRS | ITCamp Community Summer Lunch Event in Cluj

Today we had the first ITCamp Community event during the lunch break. We decided to hold the event at this time of day because it was the only available slot for our special guest, Andrea Saltarello.
The talk was about CQRS and Event Sourcing, and even though it lasted only one hour, the session contained a lot of takeaways, not only from a technical perspective, but also from a cost and architecture point of view. A great comparison between different NoSQL and ESB systems was presented from an Event Sourcing point of view.

Almost 30 people decided to turn their lunch into a geek lunch together with the ITCamp Community. This event was possible with the support of our local sponsors.


Below you can find pictures from the event. See you next time!




Azure Audit Logs and Retention Policies

Scope
In today's post we will talk about Azure Audit Logs and retention policies. Because retention policies might differ from one industry to another, different approaches are required.

Audit Logs
From my past experience, I know that each company and department might understand something different when you say Audit Logs. I was involved in projects where, once you tag a log as audit, you are required by law to keep it for 20-25 years. In this context, I think the first step is to define what an Audit Log is in Azure. In Azure, most audit logs are either an activity log or a deployment operation. The first one is closely related to any write operation that happens on your Azure resource (POST, PUT, DELETE). Read operations are not considered activity logs – but don't be disappointed, many Azure services also provide monitoring mechanisms for read operations (for example Azure Storage). The second type of audit is the one generated during a dep…
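To make per-category retention concrete, here is a minimal sketch in plain Python (not the Azure SDK; the categories, record shape, and retention periods are hypothetical examples) of deciding which log records are past their retention period:

```python
from datetime import datetime, timedelta, timezone

# Retention period per log category (hypothetical values; the real ones
# depend on your industry's legal requirements).
RETENTION = {
    "activity": timedelta(days=90),       # write operations (POST, PUT, DELETE)
    "deployment": timedelta(days=365),    # deployment operations
    "audit": timedelta(days=365 * 25),    # industries that must keep audit logs 20-25 years
}

def expired(record, now):
    """Return True if a log record is past its category's retention period."""
    return now - record["timestamp"] > RETENTION[record["category"]]

logs = [
    {"category": "activity", "timestamp": datetime(2017, 1, 1, tzinfo=timezone.utc)},
    {"category": "audit", "timestamp": datetime(2017, 1, 1, tzinfo=timezone.utc)},
]
now = datetime(2017, 7, 1, tzinfo=timezone.utc)
to_purge = [r for r in logs if expired(r, now)]
# The six-month-old activity log is past its 90-day retention;
# the audit log, with a 25-year retention, is kept.
```

The point is that a single purge job cannot treat all logs the same way; the retention table is where the industry-specific rules live.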

[Community Event] Event Sourcing and CQRS | ITCamp Community Summer Lunch Event in Cluj

At the end of this month (July 24) we will have a special guest in Cluj-Napoca: Andrea Saltarello. The format of the event will be different from the previous ones. The event will take place during the lunch break at The Office and is free.
If you want to find out more about the event, you can check the following registration links. See you at the event!

Meetup: https://www.meetup.com/ITCamp-Community/events/241394189/
Eventbrite: https://www.eventbrite.com/e/event-sourcing-and-cqrs-itcamp-community-summer-lunch-event-in-cluj-tickets-35994003032
ITCamp Community blog: https://community.itcamp.ro/2017/07/itcamp-community-summer-lunch-event-cluj-event-sourcing-cqrs/

Official announcement:
Let's try a different kind of event this summer. I propose that we all meet during the lunch break and have a talk about Event Sourcing and CQRS. There will be a special guest (Andrea Saltarello - Solution Architect at Managed Design) who will talk about his own experience on how we should manage …

Near real-time analytics for IoT technicians in the field - Azure Time Series Insights

Take a look around you and tell me if you see at least one smart device capable of sending data. Chances are you have more than one around you. At this moment I have around me a laptop, a Surface, my Fitbit, a SmartTV, and a Raspberry Pi fully equipped with weather sensors.
You might say: who cares about the data collected from them? Maybe nobody, or just ad companies. But if you were on a production line, things would be different: you would want to visualize this data from different perspectives, analyze it, and find out why production fluctuated on a specific day.

Timebound
Data collected from different sensors and devices can contain a lot of parameters, like temperature, humidity, light, and noise level. But in the end, when we want to visualize this data, the time information is what anchors the chart.
Try to imagine a chart where you put only temperature and humidity information, excluding the ti…
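To illustrate why time is the natural axis, here is a small sketch in plain Python (made-up sample readings, not the Time Series Insights API) that pivots raw sensor readings into time-ordered series ready to be drawn on one chart:

```python
from collections import defaultdict

# Raw readings as (timestamp, parameter, value) triples — made-up sample data.
readings = [
    ("2017-07-01T10:00", "temperature", 21.5),
    ("2017-07-01T10:00", "humidity", 40),
    ("2017-07-01T10:05", "temperature", 21.9),
    ("2017-07-01T10:05", "humidity", 42),
]

# Pivot: one series per parameter, each a time-ordered list of (time, value).
series = defaultdict(list)
for ts, param, value in sorted(readings):
    series[param].append((ts, value))

# Every series now shares the same time axis, so temperature and humidity
# can be overlaid on a single chart and compared at any point in time.
for param, points in series.items():
    print(param, points)
```

Without the shared time axis, the two parameter series would just be unordered numbers with no way to correlate them.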

TOGAF® 9 Certification - Architecture Resources for exam preparation

In the last 3 weeks I haven't been active on my blog. This happened because I decided to get certified in TOGAF 9.

What is TOGAF?
TOGAF (The Open Group Architecture Framework) is an architecture framework for enterprise architecture. The framework comes with support for designing, planning, implementing, governing, and maintaining an enterprise information technology architecture.
The core of this framework is TOGAF ADM (Architecture Development Method) that describes the method for developing and managing the full lifecycle of an enterprise architecture.

Why TOGAF?
On the market we can find a lot of certificates and standards related to this subject. I decided to go with TOGAF because it is one of the frameworks that sits at the foundation of many companies when you talk about enterprise architecture.
In addition to this, it is widely used in the banking, healthcare, and life science industries. In comparison with other certificates, you cannot take this exam from your own laptop. You are required to go…

Part 2 - Overengineering of a cloud application

In the last post we looked over a cloud solution designed to ingest small CSV files uploaded by users. These files were crunched by the system, which would generate static reports based on their content. Nothing fancy or complex.
The non-functional requirements (NFRs) are light, because the real business value lies in the generated reports:

- Under 200 users worldwide
- Concurrency level is 10% (20 users online simultaneously)
- Fewer than 15 CSV files uploaded in total per day
- Basic reporting functionality
- Current DB size 150MB (2M reporting entries)
- DB forecast for the next 3 years is 1GB (20-25M reporting entries)
- CSV has up to 1,000 entries (maximum 10 columns)

The system that was designed for this application was a state-of-the-art system: scalable, robust, embracing all the current technology trends. But of course it was over-engineered, too powerful, and too expensive. Now, the biggest concern was how we could reduce the running cost of the system with minimal impact (development cost). One of the drivers was that we had to come up…
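A quick back-of-the-envelope check, using only the numbers from the NFR list above, shows just how light this workload really is:

```python
# Figures taken directly from the NFR list above.
csv_per_day = 15           # fewer than 15 CSV files uploaded per day
rows_per_csv = 1000        # up to 1,000 entries per CSV
concurrent_users = 20      # 10% of under 200 users

# At most 15,000 new rows enter the system per day.
rows_per_day = csv_per_day * rows_per_csv
print(rows_per_day)

# Three-year forecast: ~1GB for ~20M entries, i.e. only a few tens of bytes per row.
bytes_per_row = 1_000_000_000 // 20_000_000
print(bytes_per_row)
```

Numbers like these — 15,000 rows per day, 20 concurrent users, a database measured in megabytes — fit comfortably on a single small instance, which is exactly why the state-of-the-art design was overkill.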