WEBVTT

00:00.000 --> 00:10.760
Okay, hi, I'm Nikolai. How many of you use sqlc already?

00:10.760 --> 00:13.760
I think quite a few people, nice.

00:13.760 --> 00:21.000
Yes, my talk is about building on top of sqlc.

00:21.000 --> 00:27.400
A couple of words about me: I'm a Senior Software Engineer based in Helsinki, Finland, and

00:27.400 --> 00:33.280
for the past seven years I have used Go for writing production code at Zalando.

00:33.280 --> 00:40.280
I author a couple of projects. One of them was presented here earlier in the Lightning Talks,

00:40.280 --> 00:48.320
pgx-outbox, and the second one, which I present today, is sqlc++, which is a playful name,

00:48.320 --> 00:52.920
because I used to do some C++ in the past.

00:52.920 --> 00:59.760
So the goal of this talk, the presentation, is to create repositories on top of sqlc-

00:59.760 --> 01:01.680
generated queries.

01:01.680 --> 01:07.920
So I'm trying to keep the strengths of sqlc, type safety and compile-time checks, but at the

01:07.920 --> 01:14.240
same time I don't want those generated records and queries to leak into the business logic

01:14.240 --> 01:16.240
of my application.

01:16.240 --> 01:23.560
So some kind of mapping is required between the generated structs and my domain models,

01:23.560 --> 01:26.960
and I tried to automate that with AI.

01:26.960 --> 01:30.400
So it's the second talk about AI today.

01:30.400 --> 01:35.800
And what I mean by a repository: it accepts and returns domain models.

01:35.800 --> 01:43.440
It hides the sqlc, schema, and query details, and it does the mapping, if needed, between

01:43.440 --> 01:50.560
domain records and database records, which in our case are the sqlc-generated structs.

01:50.560 --> 01:56.120
It can also do some transaction orchestration across multiple operations.

01:56.120 --> 02:00.280
Yeah, let's introduce the domain model.

02:00.280 --> 02:06.720
So Zalando is about e-commerce, so I'll introduce a cart, where a user can add items

02:06.720 --> 02:13.920
and then get the cart, see the items which are added, and then delete items from the cart.

02:13.920 --> 02:18.600
So that's our simple domain model, the cart.

02:18.600 --> 02:21.440
It can look like this, also quite simple.

02:21.440 --> 02:25.840
There is an owner for the cart, there are multiple items, and then each item has an

02:25.840 --> 02:33.480
ID and a price, a created-at timestamp, and so on.

02:33.560 --> 02:37.560
That's how it can be stored in Postgres.

02:37.560 --> 02:44.520
Yeah, quite straightforward, but it's still a bit different, because in the cart we have one

02:44.520 --> 02:52.920
owner ID, but here we have an owner ID in each cart item, so there is a small

02:52.920 --> 02:59.320
mismatch, an easy one, but it's already some kind of mismatch we should try to address.

02:59.320 --> 03:05.240
Yes, and now I would like to do some review of how we used to work with Postgres

03:05.240 --> 03:06.240
from Go.

03:06.240 --> 03:12.400
So when I joined the company seven years ago, in the same project we used the

03:12.400 --> 03:19.040
standard library API for talking to Postgres, and then at some point in time we started to

03:19.040 --> 03:27.040
use Squirrel to build our queries in a more dynamic way, and we started to use sqlx

03:27.040 --> 03:34.040
to simplify the scanning of database results.

03:34.040 --> 03:40.040
Then at some point in time we started to migrate to the pgx driver and then to its API,

03:40.040 --> 03:48.360
and now, maybe a year ago, we started to use sqlc,

03:48.360 --> 03:56.280
and then rolled that out across our services; ORMs are out of scope for this talk.

03:56.280 --> 04:03.400
So it started with the standard driver API, and the driver was lib/pq, because seven years ago

04:03.400 --> 04:08.960
we didn't use query builders that much, and we used manual scanning, and this

04:08.960 --> 04:15.840
is how it could look like in the code. Is someone still using that approach in production?

04:15.840 --> 04:24.760
Okay, some people. And then, once we started to adopt Squirrel, we kept the driver the same,

04:24.760 --> 04:28.240
added sqlx, and then the code could look like this.

04:28.240 --> 04:34.680
So Squirrel provides a domain-specific language to build the query, so it's not

04:34.680 --> 04:42.480
SQL anymore, but it's Go code which represents the SQL. And sqlx simplified

04:42.480 --> 04:49.600
how we can do the scanning, but on the other hand we had to introduce additional structs

04:49.600 --> 04:55.840
which have db tags, and then at the end we would do the mapping back to the domain

04:55.840 --> 05:00.160
model, because we wanted to keep the domain model the same.

05:00.160 --> 05:08.920
And then we started to adopt pgx, which is a more modern driver. Yeah, we still kept Squirrel

05:08.920 --> 05:16.920
for building the queries, but we started to use the pgx CollectRows function to do the

05:16.920 --> 05:23.360
scanning, and then it can be done directly into the domain model.

05:23.360 --> 05:31.600
That can simplify the scanning. Yeah, and if we migrate to the API of pgx, which

05:31.600 --> 05:38.600
is not the standard library API anymore, then we can use some additional features supported

05:38.600 --> 05:45.280
by Postgres, like batching, listen/notify, prepared statements, and so on. It does

05:45.280 --> 05:52.120
much better type handling, so it has more types which it can handle by default, and it has an

05:52.120 --> 06:00.120
advanced connection pool. And now comes sqlc; let's see how it can help us. So, in a

06:00.120 --> 06:06.840
nutshell, sqlc is a static generator, a deterministic generator,

06:06.840 --> 06:13.320
which reads our schema, our migrations, and uses them to understand the queries, which we define

06:13.320 --> 06:19.960
separately, as SQL, almost pure SQL, in a separate file; and it reads our config, and then

06:19.960 --> 06:27.720
it generates Go queries, and it generates the structs which correspond to them and allow the queries to

06:27.720 --> 06:35.320
be executed; and then it generates one-to-one table structs for our tables, which I don't usually

06:35.400 --> 06:45.680
use at all; I mostly use the Go queries and structs. So if we adopt sqlc, then we start to use

06:45.680 --> 06:52.560
pure SQL, so it's a more SQL-first approach: not an ORM, not building queries with a DSL,

06:52.560 --> 07:01.920
but we use almost pure SQL. Yes, and struct mapping is automatic now, but it's automatic

07:01.920 --> 07:07.360
only into the generated structs, so if we want to use domain models in our business logic we still

07:07.360 --> 07:15.840
need to do the mapping. So this is how it could look like: here is the query, which

07:15.840 --> 07:22.400
I define in one of the SQL files, and after generation this is the Go code; I just

07:22.400 --> 07:31.520
combined them together here in this file. So now we have a building block, this GetCart.

07:31.520 --> 07:38.240
It's a query in Go which we can use as a building block in other places, in the

07:38.240 --> 07:49.440
repository layer; we can use that one. So, what are the pros of sqlc? One of the main advantages

07:49.440 --> 07:57.520
is that it can do type safety, and it happens at compile time:

07:57.520 --> 08:04.800
so it will be able to resolve the types of what we have in the database and accordingly

08:04.800 --> 08:14.480
generate our structs. How can it do that? Because under the hood it uses the parser library from

08:14.560 --> 08:23.600
Postgres itself. It used to be different before, but now they use

08:23.600 --> 08:33.360
the pg_query_go library, and it leverages an AST, which was

08:33.360 --> 08:40.800
mentioned in one of the previous talks. It also generates a lot of boilerplate code, which we

08:40.880 --> 08:48.960
had to write manually before, and it keeps a separation of SQL and Go code. So, what is an AST?

08:48.960 --> 08:56.080
Simply, an AST (abstract syntax tree) is shown here, although it's a simplified version;

08:56.080 --> 09:02.240
it's not treated as just text, an SQL query; it's more about the semantic meaning

09:02.240 --> 09:09.840
of that query, so sqlc leverages that. So, under the hood, what sqlc does: it reads all the

09:09.920 --> 09:18.480
migrations, and for each migration it builds an AST, then it merges them into sqlc's internal

09:18.480 --> 09:25.680
catalog. And once it wants to generate a query, it will look up the types

09:25.680 --> 09:34.720
in the catalog, so that's why it knows what types of return data we get, and then it can

09:34.720 --> 09:42.320
do the safe mapping into Go types. So there are certain challenges if we start to use sqlc:

09:42.880 --> 09:50.720
one of them is that it is kind of easy to leak generated records into our business logic, so to avoid that,

09:50.720 --> 09:57.440
we need to do some mapping between records and domain models. Also, many people would ask what

09:57.520 --> 10:04.080
you would do with dynamic, conditional queries, because an sqlc query is static. I can,

10:04.640 --> 10:11.440
in the scope of this talk, say that it's solvable, but I don't have much time to go into that.

10:11.440 --> 10:18.240
So, about the repository: as I mentioned before, the idea is that the repository should work

10:18.240 --> 10:25.120
with domain models, not the generated structs. So in the simple case, we need to do some

10:25.280 --> 10:32.480
delegation from the repository method to the query method generated by sqlc, and then do the

10:32.480 --> 10:42.800
mapping of the results, and so on. So yeah, a repository is more than the sqlc-generated query,

10:42.800 --> 10:50.480
but the generated query can be used as the building block to implement the repository. So here I would

10:50.480 --> 10:57.200
like to introduce sqlc++, which is a playful name. Yeah, so the idea was that this kind of

10:57.200 --> 11:04.320
glue code, like the mapping and delegation, is boilerplate code to write, which we can ask

11:04.320 --> 11:12.960
AI to help us with. So we would just provide the interface; we still do the sqlc

11:12.960 --> 11:21.280
preparation work, but for the repository layer, we provide the interface and then ask AI to generate

11:21.280 --> 11:29.760
it and do the mapping against what was generated previously by the deterministic generator. So, what do we get?

11:30.800 --> 11:37.520
So here, at the top, is what's been generated by sqlc deterministically; we feed that

11:37.520 --> 11:44.560
into the AI-based generator, and then we also provide the repository interface, our domain models, and we

11:44.560 --> 11:52.960
have a reference implementation somewhere on GitHub; this explains to the AI how we used to do the

11:53.920 --> 12:03.200
delegation, how we write the tests (integration tests for our database access), and so on.

12:03.840 --> 12:12.160
So this is it in a nutshell. I had this idea some months ago,

12:12.160 --> 12:20.000
and I started with some simple text prompts to play with, but in April there was the rise of those

12:20.000 --> 12:27.520
coding agents (I think Claude Code was released), so I migrated this to custom commands,

12:28.240 --> 12:33.840
and I think the next step is to actually extract what we have in the reference implementation

12:33.840 --> 12:41.520
and make a skill out of that, so we can leverage skills. Skills are the next generation,

12:41.520 --> 12:49.120
I think, for this project. So, for the purpose of the demo, I found Crush.

12:50.000 --> 12:58.240
Crush is a coding agent (I met some of its founders at a conference in Florence), and it's open source;

12:58.240 --> 13:06.720
the license is a bit tricky, it's FSL. It's multi-model, and it has LSP, the Language

13:06.720 --> 13:13.680
Server Protocol (gopls is Go's implementation of the Language Server Protocol), so it has support for

13:13.680 --> 13:20.240
that; so in theory it would spend fewer tokens. And it supports extensions like agents,

13:20.240 --> 13:27.840
skills, MCP, and custom commands. So this is an example of a custom command I can use for this project,

13:27.840 --> 13:33.760
for example. I just define a number of steps, and I say: first run

13:33.760 --> 13:40.960
sqlc generate, and then collect what you got as a result into some context,

13:41.120 --> 13:47.120
the content of those files, and take the repository interface I would like to implement,

13:48.000 --> 13:54.880
and then do the generation: the integration tests first (it's not mentioned here), and then

13:54.880 --> 14:02.640
the repository itself, and then I make sure everything passes and compiles, and so on.
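
Reconstructed from this description (not the actual file), such a custom command could look something like:

```markdown
<!-- hypothetical custom command file, reconstructed from the talk -->
1. Run `sqlc generate` and make sure it succeeds.
2. Collect the generated files (models, queries) into the context.
3. Read the repository interface that should be implemented.
4. Generate the integration tests first, following the reference implementation.
5. Implement the repository itself: delegate to the generated queries and map records to domain models.
6. Build and run the tests; iterate until everything compiles and passes.
```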

14:02.640 --> 14:09.360
So yeah, a couple of words about the reference implementation of the project:

14:09.440 --> 14:17.040
I made it a bit more complex compared to the demo I intend to show, so it actually uses two

14:17.040 --> 14:23.280
tables instead of one, and it has some transaction support: if we create an order, we have some

14:23.280 --> 14:29.600
order metadata and then order items, and we insert them in a transaction; and if we delete

14:30.320 --> 14:35.920
some order, we also do that in a transaction across multiple tables. And then it supports the

14:36.720 --> 14:43.200
create operation, and on top of that some search by filter, soft delete, and so on. And I tried to

14:43.200 --> 14:53.920
add a few different field types, both on the Go side and on the Postgres side. So, let me try to show you the

14:53.920 --> 15:03.040
demo. First I need to run Crush, because it will take some time. So, as you can see,

15:03.040 --> 15:10.480
it has Go LSP support, which uses gopls on my machine, and I have a godoc

15:10.480 --> 15:17.920
MCP (this is a different MCP server from the one Daniel showed today), but it has only one tool:

15:17.920 --> 15:26.480
it can read Go documentation, but it does that in a bit more optimized way to save

15:26.480 --> 15:33.680
tokens. And now I'm going to run my custom command; I have the custom command, it's called

15:33.680 --> 15:41.600
generate, and I will do that for the cart, which I showed in the beginning. So it read

15:42.720 --> 15:52.720
my custom command, and it starts: it made a plan (it has seven steps here), and it is

15:52.720 --> 16:02.320
doing some work right now. It uses Claude Sonnet, yes, and LSP; it also uses LSP, something is happening

16:02.320 --> 16:10.640
under the hood, and ideally that helps it use fewer tokens. And while it's working (it takes a couple

16:10.640 --> 16:18.560
of minutes), I will show you some source code of the demo project. So, what do I have? Here

16:18.560 --> 16:25.920
is the sqlc config, first of all: I have to specify where my migrations are, and where to generate

16:25.920 --> 16:36.000
the queries, and what driver to use, and some additional parameters. And I can customize some types:
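
Such an sqlc config might look roughly like this (paths, package names, and override targets are illustrative, not the speaker's actual file):

```yaml
version: "2"
sql:
  - engine: "postgresql"
    schema: "migrations"        # where the migrations live
    queries: "queries"          # where the annotated SQL queries live
    gen:
      go:
        package: "db"
        out: "internal/db"      # where to put the generated Go code
        sql_package: "pgx/v5"   # which driver to generate for
        overrides:
          - db_type: "uuid"
            go_type: "github.com/google/uuid.UUID"
          - db_type: "timestamptz"
            go_type: "time.Time"
```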

16:36.000 --> 16:43.280
if there is ambiguity about what type in Go should be used (for example, for UUID), then I can

16:43.280 --> 16:50.960
say, okay, which library should be used; and for time, I also want to use Go's time.Time, but by default

16:50.960 --> 17:01.840
it would use pgx's time types. So, what else? My sqlc queries are here in this file; as I showed, it has

17:01.840 --> 17:10.400
three methods. And these three files (models, db, the queries) are the result of the generation, which happened just

17:10.400 --> 17:16.400
now; before the demo they were not here. So, my domain model is the cart, as I showed before,

17:17.600 --> 17:26.080
a Money type, some migrations. So yeah, this is the important part, the cart repository: here I just provided

17:27.360 --> 17:39.200
the interface, and I expect that the AI can implement it. So it's not here yet; it should generate the files

17:39.200 --> 17:48.080
in the repository package. And for Crush I use a custom command, which I have to place here. It also supports

17:48.080 --> 17:56.880
skills, which I deleted for now; but I need to mention that if you use Crush, it doesn't know about

17:56.880 --> 18:04.160
skills by default; you need to say where the skills are. And here are the permissions which I use,

18:04.800 --> 18:11.440
so I don't have to be asked each time while it's working. But I would say that the granularity of

18:11.440 --> 18:19.280
permissions is different from other coding agents which I used before, so there is room for them to

18:19.280 --> 18:38.000
improve that. So yeah, it's still doing something. Okay, maybe, while it's working... yeah, so

18:39.280 --> 18:46.640
while it's working, I have a screenshot in case it wasn't working. So this is a recap of that:

18:47.600 --> 18:55.200
I added the custom command, and I just say: generate, by the interface, the repository implementation. And it

18:55.200 --> 19:03.200
made the plan, it's working on the plan, yes. And it will also generate integration tests for me, and for

19:03.200 --> 19:09.680
the integration tests, in the reference implementation I use Testcontainers to run a

19:09.680 --> 19:15.920
real Postgres, to do proper integration testing, and I use the suite package from

19:15.920 --> 19:24.960
stretchr/testify (a popular library for assert and require), but suite is another

19:24.960 --> 19:31.680
useful package, I think, because I can start the container; it's good for the container lifecycle:

19:31.680 --> 19:37.840
I can start it before all the tests run, and then shut it down when all the tests are done.

19:38.800 --> 19:44.720
And there are table tests for each repository method, and I use a couple of useful libraries: gofakeit,

19:44.720 --> 19:53.120
to generate some data on the fly, and go-cmp by Google, which is kind of an official library

19:53.120 --> 20:00.960
to compare structs in tests in a smart way. And this is the recap of the architecture of

20:00.960 --> 20:09.280
sqlc++: it takes what sqlc deterministically generated, and then adds some delegation

20:09.280 --> 20:16.240
and mapping by example, which is on GitHub, and eventually we get a repository which only depends

20:16.240 --> 20:23.840
on the domain models, and it should be fully tested and usable, because the integration tests are

20:23.920 --> 20:31.840
generated as well. Let's see what we have; still almost done, but not yet.

20:34.400 --> 20:40.240
Okay, we actually have some files being generated here in the repository; they were not here before, we

20:40.240 --> 20:48.480
only had this interface, but now we have the repository, which got generated. You can see that,

20:48.560 --> 20:56.480
compared to the previous version which I have committed, it's not 100% the same;

20:56.480 --> 21:04.240
AI is still involved, so it doesn't generate exactly the same results, but functionally it's

21:04.240 --> 21:10.400
more a bit of a syntax difference; functionally it should be the same. And here are the integration

21:10.480 --> 21:17.840
tests, which use Testcontainers; they start a real Postgres, and so on.

21:20.240 --> 21:27.600
Yes, and I think, actually... yeah, it's actually done. Okay, it's done. Actually, yes, it's done.

21:28.880 --> 21:36.160
Yeah, so the code is done, and we have a report, as I used to have previously.

21:36.640 --> 21:44.640
Okay, so now it's fully tested, with almost 85% coverage from the integration tests.

21:44.640 --> 21:52.640
And I'm running out of time, but I'm exactly at the end of my presentation. So, the

21:52.640 --> 22:00.800
takeaways: adopt sqlc, and you can adopt Crush if you really like open source; you can

22:00.800 --> 22:06.880
contribute to both projects. And I advise not to use sqlc structs in the business logic,

22:06.880 --> 22:12.960
because it creates a tight coupling; so I advise to use the repository

22:13.600 --> 22:22.560
pattern. But the boilerplate can be automated (this delegation and mapping), and augmented

22:22.560 --> 22:28.000
generation, guided by the reference implementation, provides more stable results than just saying,

22:28.320 --> 22:34.480
okay, generate me a data access layer. And thank you very much. Thank you!

22:40.240 --> 22:44.720
If you still have questions, he will be available in our hallway track for any of your

22:44.720 --> 22:48.880
SQL or AI questions. Thank you.

