
Thing: move into core the conversions to/from YAML code done by MainUI #4585

Open
lolodomo opened this issue Feb 2, 2025 · 61 comments
Labels: enhancement (An enhancement or new feature of the Core)

Comments

@lolodomo
Contributor

lolodomo commented Feb 2, 2025

When we discuss a new format for configuration files, one wish is to use a common (YAML) syntax in the MainUI code tab and in the new file format. To make this a reality, a required initial step is to replace the conversion to/from YAML currently hardcoded in MainUI with new REST APIs in core in charge of these conversions. MainUI would then call these new APIs when needed, without handling YAML conversion itself. This would also make it easy to show another format (like our current DSL format) in the code tab, just by passing the requested format to the API.

@openhab/webui-maintainers : I need your help to identify which APIs you need for that. I guess two APIs are necessary: one to convert to YAML and one to parse YAML.

When entering the code tab, you need to build the YAML code from the current internal data; this data is potentially different from the thing stored in the registry, because the user may for example have already changed configuration parameters. In #4569, I already added the API GET /things/{thingUID}/syntax/generate, but it is based on the thing in the registry. So I believe I should add another similar API where you can pass thing data (ThingDTO) as a parameter, like in the API to create a new thing, and the returned output is the generated YAML code: POST /things/{thingUID}/syntax/generate

Then, when leaving the code tab, you need an API to parse the YAML code potentially updated by the user and retrieve the corresponding thing data (ThingDTO). I could add a new API POST /things/{thingUID}/syntax/parse with a parameter in which you provide the YAML code; the returned output is the corresponding thing data (ThingDTO).
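To make the intended contract concrete, here is a minimal sketch in Python (the flat YAML shape, the field names and the stand-in functions are all invented for illustration; the real endpoints work on full ThingDTOs): the parse API must invert the generate API, otherwise merely opening and closing the code tab would alter the thing.

```python
def generate_yaml(thing):
    """Toy stand-in for POST /things/{thingUID}/syntax/generate:
    render a minimal flat ThingDTO-like dict as YAML text."""
    return "\n".join(f"{key}: {value}" for key, value in sorted(thing.items()))

def parse_yaml(text):
    """Toy stand-in for POST /things/{thingUID}/syntax/parse:
    read the flat YAML text back into a dict."""
    thing = {}
    for line in text.splitlines():
        key, _, value = line.partition(": ")
        thing[key] = value
    return thing

# Round-tripping through generate and parse must be lossless, so the
# UI can switch between the Design tab and the Code tab freely.
```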

@florian-h05 : please confirm it would be OK and sufficient for MainUI needs.

In the code tab, I can see that all channels are listed. Does that mean you support adding/removing channels through this tab? Or do you show them just for information? Or just to let the user update a channel configuration parameter?
Of course, in the future config file, the user should not have to define all these channels when they can be deduced from the thing type. So that is a point to clarify.

@lolodomo lolodomo added the enhancement An enhancement or new feature of the Core label Feb 2, 2025
@lolodomo
Contributor Author

lolodomo commented Feb 2, 2025

If this initial step can be achieved, we can then discuss small improvements to the syntax to make it more user friendly; of course, a compromise will have to be found.
Defining a thing does not require a lot of information; it is relatively simple. The current syntax defined by MainUI does not look bad and I do not expect big changes. And we should think of users already used to the YAML syntax in the MainUI code tab.
Note that these adjustments would then require no changes in MainUI, as long as MainUI is using the APIs, and they would automatically be applied to what is displayed in the code tab.

@lolodomo
Contributor Author

lolodomo commented Feb 2, 2025

In the code tab, I can see that all channels are listed. Does that mean you support adding/removing channels through this tab? Or do you show them just for information? Or just to let the user update a channel configuration parameter?

The existing API to update a thing expects as input either no channels (in which case the thing's channels are unchanged) or a list of channels (in which case the thing is updated to have exactly this list of channels). So MainUI needs to maintain a list of all channels for a thing to call the update API. That might be the reason why all channels are shown in the code tab? I am also wondering whether the purpose is also to let the user remove channels?

That could prevent having exactly the same YAML code, if MainUI needs to display all existing channels while, in a config file, the user would like to include only non-default channels. Even if the syntax itself remains common.

@FlorianSW
Member

@FlorianSW : please confirm it would be OK and sufficient for MainUI needs.

Are you sure you wanted to ping me with this? :) I have not worked on MainUI or openHAB Things in quite a while 🙈

@lolodomo
Contributor Author

lolodomo commented Feb 3, 2025

Are you sure you wanted to ping me with this?

No, it was a mistake, my intention was of course to ping @florian-h05

@florian-h05
Contributor

The proposed APIs are sufficient in general; they of course cause some network traffic, but I think we can live with that.

The existing API to update a thing expects as input either no channels (in which case the thing's channels are unchanged) or a list of channels (in which case the thing is updated to have exactly this list of channels). So MainUI needs to maintain a list of all channels for a thing to call the update API. That might be the reason why all channels are shown in the code tab? I am also wondering whether the purpose is also to let the user remove channels?

a. I have only tried this for Things with unmodifiable channels; there it is possible to modify the channel config.
b. I would expect that for Things with modifiable channels, you can add and remove them from the code tab as well.

In both cases, having them in the code tab is a great help. I think in b it should be easy to keep the current behaviour, because you actually need to define the channels manually; a is more difficult to handle though.

@rkoshak

rkoshak commented Feb 4, 2025

MainUI would then call these new APIs when needed, without handling YAML conversion itself. This would also make it easy to show another format (like our current DSL format) in the code tab, just by passing the requested format to the API.

Could it make sense to supply the MIME type as part of the request instead of creating a whole new API endpoint? If the request includes "application/yaml" instead of "application/json", with "application/json" as the default, could that work? DSL could use "application/dsl" and so on when/if new formats are developed. Then the server side knows which format is desired and can convert back and forth (as necessary) for creates and updates.

Or is that what you are already proposing (it doesn't seem so, but I could be wrong)? Or is there something about doing that which wouldn't work?

b. I would expect that for Things with modifiable channels, you can add and remove them from the code tab as well.

I can confirm this, at least when it comes to MQTT and HTTP bindings.

That might be the reason why all channels are shown in the code tab? I am also wondering whether the purpose is also to let the user remove channels?

My understanding is that it just shows the raw JSON returned by the API, converted to YAML. I don't think there is any more reasoning to it than that. And the raw JSON returned by the API includes all the Channels, as is required to show all the Channels in the Design tab.

To complicate a. a little more: what if I want to modify a Channel that I haven't modified before? There would need to be a way to show only modified Channels or all Channels, and to switch between the two.

@lolodomo
Contributor Author

lolodomo commented Feb 9, 2025

Could it make sense to supply the MIME type as part of the request instead of creating a whole new API endpoint?

No, because we currently have no API matching the UI needs.
Your remark would be more appropriate for what I initially implemented in #4569, but doing that would be a breaking change for the most common APIs (getting a thing or an item), and the existing APIs already contain parameters that are more or less specific to JSON output. I also need a new parameter not needed for JSON output. I don't want to over-complicate the existing APIs.
But I will study the idea again, at least for the API returning the list of all items and the API returning the list of all things.

I have now updated the new APIs in #4569 to limit their number and to cover what is needed by MainUI.

And in parallel, I already made a POC implementing YAML for things; it is relatively simple and does not need many lines of code, thanks to the great work done by @J-N-K on the YAML registry after my initial contribution for YAML tags. Maybe I will include everything in #4569.

@spacemanspiff2007
Contributor

spacemanspiff2007 commented Feb 9, 2025

Could it make sense to supply the MIME type as part of the request instead of creating a whole new API endpoint?

I think that's very unusual and would be hard both to teach and to document, e.g. in the API explorer.
The most common way to select another output format is typically a query parameter.
I think a dedicated API endpoint that does the conversions is much nicer, both usability- and documentation-wise.

@rkoshak

rkoshak commented Feb 10, 2025

I think that's very unusual and would be hard both to teach and to document, e.g. in the API explorer.

I guess I don't understand why the Accept media type header field exists in the first place, then, if it can only ever support one media type.


@florian-h05
Contributor

Accept is an HTTP header used to signal to the server which media types the client accepts. It's rather technical IMO and not something to use to say "I want format A or B".
It basically signals which technical format the browser accepts the response in.

@FlorianSW
Member

Could it make sense to supply the MIME type as part of the request instead of creating a whole new API endpoint?

I think that's very unusual and would be hard both to teach and to document, e.g. in the API explorer.
The most common way to select another output format is typically a query parameter.
I think a dedicated API endpoint that does the conversions is much nicer, both usability- and documentation-wise.

Just sliding in for this single comment, so sorry if this is out of context :)

It's, from my experience, a workaround to represent the different formats served by the server using a query parameter. A much more standard way is in fact server-side content negotiation. This covers format, language and encoding, if supported by the server. See this MDN doc for it: https://developer.mozilla.org/en-US/docs/Web/HTTP/Content_negotiation

It's also untrue that the Accept header wouldn't reflect a preference of the client (format A rather than format B). It's intentionally meant for that. It's a list of media types the client is able to process, which can optionally also include a relative preference for each other. E.g. an Accept header value can express "give me JSON over XML, while XML is also OK if the resource is unavailable in JSON, but never give me CSV". The server then needs to respond with one of the requested media types, preferably the one the client prefers most. If none of the requested formats is available, the response should have status code 406.

So, without being really involved here anymore: rather than having a query parameter, using content negotiation would feel more natural for an API, IMHO.
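To illustrate the negotiation described above, here is a small, self-contained sketch of server-side selection from an Accept header with q-values (simplified: no wildcards or other media type parameters; the function name is made up for this example):

```python
def best_media_type(accept_header, available):
    """Pick the best available media type for an Accept header with q-values.

    Returns None when nothing acceptable is available; a real server
    would then answer 406 Not Acceptable.
    """
    prefs = []
    for part in accept_header.split(","):
        fields = part.strip().split(";")
        mtype = fields[0].strip()
        q = 1.0  # q defaults to 1.0 when absent
        for param in fields[1:]:
            name, _, value = param.strip().partition("=")
            if name == "q":
                q = float(value)
        prefs.append((mtype, q))
    # Highest q first; Python's stable sort keeps header order for equal q
    prefs.sort(key=lambda p: -p[1])
    for mtype, q in prefs:
        if q > 0 and mtype in available:
            return mtype
    return None
```

For example, `best_media_type("application/json;q=0.5, text/yaml", [...])` prefers `text/yaml` because its implicit q of 1.0 beats 0.5.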

@mherwege
Contributor

It's, from my experience, a workaround to represent the different formats served by the server using a query parameter. A much more standard way is in fact server-side content negotiation. This covers format, language and encoding, if supported by the server. See this MDN doc for it: https://developer.mozilla.org/en-US/docs/Web/HTTP/Content_negotiation

I am not an API expert, but I must say I use APIs at work where the response is JSON or XML depending on the Accept header value in the request. So I am with @FlorianSW on this. I see this as a perfectly valid use of the Accept header and I don't think it makes using the API more complicated. We are talking about the same response, but in a different format. A generic translation endpoint could then be sent something in one format and return it in another, depending on the Accept header.

@spacemanspiff2007
Contributor

The issue is that, depending on the target format, different input structures are required for a POST request.
So it's not accessing the same resource but a different functionality, which requires a different input.
As far as I know, it's not possible to document different input structures based on the content headers, which means the API documentation has to be done manually (e.g. in the openHAB docs), which is undesirable.
For returning e.g. all existing items this could be used, but it would move file-based parts from one endpoint to another, so this is rather unexpected. However, I don't have a strong opinion on that.

@FlorianSW
Member

The issue is that, depending on the target format, different input structures are required for a POST request.
So it's not accessing the same resource but a different functionality, which requires a different input.

I'm not sure I follow this completely, so my answer might be a bit off. Please correct me if one of my assumptions is incorrect :)

When POSTing to an endpoint, the Accept header still only applies to the response, not to the payload of the body in that request. For that, an HTTP request should use the Content-Type header to indicate what media type the body of the request has. Using that, the structure/format of the request can (obviously) differ depending on the media type. However, the logical result of that representation should be the same for the same endpoint, shouldn't it? No matter whether it is JSON, YAML or CSV.

This can then also be documented in an OpenAPI spec; something like this should do the trick, IIRC:

paths:
  /things/{thingId}:
    post:
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/ThingUpdate'
          text/yaml:
            schema:
              $ref: '#/components/schemas/ThingUpdate'

(the API endpoint and everything else is just an example; it's merely meant to illustrate the doc :))
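The response side can be documented the same way; a hypothetical fragment (the schema name is invented) distinguishing the negotiated response formats might look like this:

```yaml
      responses:
        '200':
          description: The updated thing, in the negotiated format
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/ThingDTO'
            text/yaml:
              schema:
                type: string  # the serialized YAML document
```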

@spacemanspiff2007
Contributor

spacemanspiff2007 commented Feb 11, 2025

When POSTing to an endpoint, the Accept header still only applies to the response, not to the payload of the body in that request.

But when selecting a different file format, the body of the request has to be different.
E.g.
a conversion endpoint which converts a list of things to a things file, or a list of items to an items file.
Using the content type there is not a good idea, since depending on the desired output the input has to be different.
It's better to make two sub-endpoints which document the required input accordingly.
I hope this makes it clearer what I meant with

So it's not accessing the same resource but a different functionality, which requires a different input.

@mherwege
Contributor

mherwege commented Feb 11, 2025

But when selecting a different file format, the body of the request has to be different.

Yes, and you specify that with the Content-Type request header.

So it is a combination of a Content-Type header and an Accept header. The endpoint should then return an error when it cannot do the conversion between the two types, or the converted content when it can.

If you keep the translation function within the item/thing endpoint, this would do it. If it were a generic translation service, you would also need a path or query parameter to define what it is you are translating (e.g. item(s), thing(s), sitemap). I think this would avoid a proliferation of extra endpoints.

@spacemanspiff2007
Contributor

spacemanspiff2007 commented Feb 11, 2025

Yes, and you specify that with the Content-Type request header.

But the content type of the request is application/json in both cases.
It's a list of items or a list of things as JSON. How will you document that?

To make a complete example:

[{"name": "MyItemName", "label": "MyLabel"}]

should return

Item MyItemName "MyLabel"

and

[{"UID": "MyThingName"}]

should return

Thing MyThingName

How will you document the required input structures in the OpenAPI spec if it's the same endpoint?
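For comparison, the subpath-per-type approach argued for above can be documented with a distinct request schema per endpoint; a hypothetical OpenAPI fragment (the paths and schema names are invented) might look like this:

```yaml
paths:
  /conversion/items:
    post:
      requestBody:
        content:
          application/json:
            schema:
              type: array
              items:
                $ref: '#/components/schemas/ItemDTO'
  /conversion/things:
    post:
      requestBody:
        content:
          application/json:
            schema:
              type: array
              items:
                $ref: '#/components/schemas/ThingDTO'
```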

@FlorianSW
Member

I think the main difference here is the way such an API would look. E.g.:

Variant 1:
If you keep the API to request and update an object (e.g. a Thing, or an Item) as it is, then the endpoint would look like this:
/api/things/{thingId}

Making a GET on that endpoint with an Accept header of application/json will return the thing in a JSON format:

[{"UID": "MyThingName"}]

Doing the same request with a media type in the Accept header that represents the DSL of openHAB (which needs to be defined, but assuming it is something like text/vnd.openhab-dsl), it would return the same thing in the DSL representation:

Thing MyThingName

The very same thing would be done when updating a thing, just with a POST request. There, the Content-Type header indicates the media type of the request body (again JSON, DSL, or any other supported format). It's then the task of the API endpoint to convert the client's chosen representation into the representation that is then saved into the database or file or wherever.

With this approach, there is no separate API endpoint to do conversion of structures, as the respective endpoints of the objects themselves can do the necessary conversion out of the box.

Variant 2:
If you would like to keep the existing endpoints and create a separate conversion endpoint (and I'm not sure what value that actually brings), then you're right: you would need some way to indicate what the body contains, an item, a thing or whatever. You could then, however, use a conversion endpoint per type, something like:
/api/convert/thing and /api/convert/item, where the last part of the URI (item, thing) identifies the type you want to convert.

As you can see, this approach in fact only duplicates the number of endpoints, as you would need a conversion endpoint per type, while you still need the GET and POST endpoints for the types anyway.

I hope it makes sense what I want to express here :)

@mherwege
Contributor

mherwege commented Feb 11, 2025

I think we are talking past each other here.

The input could also be:

---
- name: MyItemName
  label: MyLabel

and then the Content-Type should be text/yaml. With Accept set to application/dsl, the output would still be:

Item MyItemName "MyLabel"

Or the other way around, input content-type equal to application/dsl:

Item MyItemName "MyLabel"

and Accept set to application/json would give:

[{"name": "MyItemName", "label": "MyLabel"}]

You will need to distinguish between what you are converting (items, things, sitemaps...), and that would be in the subpath or a query parameter.

@mherwege
Contributor

mherwege commented Feb 11, 2025

With this approach, there is no separate API endpoint to do conversion of structures, as the respective endpoints of the objects themselves can do the necessary conversion out of the box.

The reason for a separate endpoint would be to have the UI request the translated syntax without committing it to OH yet. Right now, the UI has its own set of parsers and generators to do that translation. And you are able to edit YAML in the UI (and DSL for sitemaps in the UI). You would need a way to switch between the full UI representation and whatever textual format without committing it yet. Having a REST endpoint to get the translation from core would allow keeping it all in sync in core and removing the parsers and generators from the UI. They are difficult to keep in sync as it is now, and core can leverage what is already there.

But it doesn't have to be one or the other. I can see value in having this on the item/thing endpoint and also having a translation endpoint that does not commit. The UI is then ultimately responsible for committing.

@spacemanspiff2007
Contributor

It's nice that you two are so eager to comment, but neither of you has answered my question at all; instead you have made suggestions for how something totally different could look.
I'll try again:
The goal is to provide an endpoint that does file format conversion. It shall not access existing objects at all.
My suggestion is that there are two endpoints (just an example):

POST create_thing_file
POST create_items_file

This is nice because for both API endpoints the request body structure can be properly described in the API docs.
I just have to look at the API description to know what data structure I have to pass in.

Both of you suggest that the Content-Type is used to describe the input structure.
E.g. a single API endpoint

POST create_file

where the Content-Type is used to select whether a things file or an items file is returned.
I'm aware that this is possible programmatically. The question that is still unanswered for me:
the input structure for an items file is different from the input structure for a things file, so how can the input structure properly be documented in the API description?

want to create a separate conversion endpoint (which I'm not sure what value that actually brings

The use case where one wants to transform only one already existing item, or all existing items, is very limited.
It's rather a subset of items, or a set of items that don't exist yet (e.g. a suggestion from the UI), that will be passed in.
If I first have to create the items in the UI, then create the transformation, and then delete the items in the UI again, just to get a file format transformation, the feature is worthless.
Also, future file formats will most certainly contain a mixture of object descriptions (things, items, transformations, persistence), so this wouldn't work with the suggested approach at all.

But it doesn't have to be one or the other.

That's why I wrote above:

For returning e.g. all existing items this could be used

@mherwege
Contributor

mherwege commented Feb 11, 2025

The goal is to provide an endpoint that does file format conversion. It shall not access existing objects at all. My suggestion is that there are two endpoints (just an example):

POST create_thing_file
POST create_items_file

This may be your goal, but I am looking at it from different perspectives. First, look at the title of the issue: to/from YAML. It doesn't specify the other format you convert to/from. It talks about moving this into core. So that assumes we are talking about JSON to/from YAML conversion, not DSL, because the JSON/YAML conversion is what is currently done in the UI. There is no DSL to YAML conversion in the UI.
In the UI as far as DSL is concerned, what you have right now is:

  • import item/thing DSL: parses and stores as managed items (JSON format to REST API)
  • sitemap DSL: parse and generate (convert to/from JSON, no YAML involved)

The related PR #4569 is just about generating DSL syntax for things/items.

I want to look at it more holistically. We already have 3 formats here (JSON, YAML, DSL) and we want to convert to/from them, without directly committing anything. And there might be other formats in the future. That's my base premise. So I think setting the input format and output format would be perfectly fine using Accept and Content-Type headers.
I never said we should use this to indicate whether it is about items or things (or sitemaps, where it would be useful as well in my view). That should be:

  • either a subpath in the REST call
  • or a query parameter.

And that is exactly what you propose as well, except that you always assume DSL as the output. There is no such thing as an item format or a thing format. There is a DSL format for items, a YAML format for items, a JSON format for items...
Having to define the type (item, thing, ...) as a parameter has its problems as well. You state yourself:

Also, future file formats will most certainly contain a mixture of object descriptions (things, items, transformations, persistence), so this wouldn't work with the suggested approach at all.

In this case, you cannot even rely on a subpath or parameter (or a totally separate endpoint). It has to be encoded in the file itself what the entity (or entities) is that you are converting. That would require an extension to the language syntax, some form of a wrapper array in the data itself. That's the only way you would be able to split it out. But you can still enforce the content type to be consistent for the whole call (if you define something in DSL, everything in it should be in DSL).

@spacemanspiff2007
Contributor

And that is exactly what you propose as well, except that you always assume DSL as the output. There is no such thing as an item format or a thing format. There is a DSL format for items, a YAML format for items, a JSON format for items...

I know that it's DSL and I know that there is YAML from the UI. It's an example, because the output is irrelevant to my point that there's no way to document the required input structure.

In this case, you cannot even rely on a subpath

Why would it not work? In a subpath I can properly document the required input structure.
It's actually the only way to do it.


I have tried multiple times to explain that my issue is the lack of documentation of the required structure (not format!).
The only proper way to achieve this is a subpath per file format.
It's not possible with Content-Type headers.
The headers only work if you assume that the information in all the formats is the same, which is not guaranteed.
Thus it's impossible to directly convert between formats; one always has to parse to and create from JSON (DTO objects).
That's also the only way the documentation can show the proper input/output structure.

@mherwege
Contributor

mherwege commented Feb 11, 2025

Sorry if I didn't understand what you were trying to say.

So, correct me if I am wrong, but your concern is that the message body example and schema would be auto-generated for application/json (and also application/xml if you added that as a consumer annotation to the endpoint), but not for the other content types. So you could only have the full schema defined for JSON input or output, while the others would basically be string input with no DTO mapping. Did I get this right?

Why would it not work? In a subpath I can properly document the required input structure.

Looking at the code, this documentation is auto-generated. So I don't think it is easy to do that for custom-defined formats anyway. You can only have string as an alternative type, and it would just show as such: string.

I can see the complication in coding, but I don't see why that should impact the REST endpoint structure. In the end, if the input is DSL or YAML, the best we have now is string as an input parameter. That does not auto-document at all either, even if you create a separate endpoint for it. So I would argue a translation API should just have string as input and output but, depending on the headers set, interpret the content of the string differently.

Thus it's impossible to directly convert between formats; one always has to parse to and create from JSON (DTO objects).
That's also the only way the documentation can show the proper input/output structure.

Yes, maybe I am making the wrong assumption that all formats contain all the info and that you can translate in all directions. Still, wouldn't that mean:

  • to JSON: you could use a content-type header to tell the endpoint what content language (YAML, DSL...) you are providing
  • from JSON: you could use an accept header to tell the endpoint what content language (YAML, DSL...) it should return

And if that is possible, what is stopping you from internally converting to JSON (actually it would convert to the DTO, not JSON) and then to the requested format in the same call?
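The pivot-through-the-DTO idea can be sketched like this (a toy illustration, not openHAB code: the DSL media type, the parsers and the one-item formats are all invented; real conversions would go through the actual DTO classes):

```python
import json

def parse_body(content_type, body):
    """Parse the request body into a neutral DTO (here: a plain dict)."""
    if content_type == "application/json":
        return json.loads(body)
    if content_type == "text/x-openhab-dsl":  # hypothetical media type
        # Toy parser for exactly: Item <name> "<label>"
        _, name, label = body.split(maxsplit=2)
        return {"name": name, "label": label.strip('"')}
    raise ValueError("415 Unsupported Media Type")

def serialize_dto(accept, dto):
    """Render the DTO in the representation the client asked for."""
    if accept == "application/json":
        return json.dumps(dto)
    if accept == "text/x-openhab-dsl":
        return f'Item {dto["name"]} "{dto["label"]}"'
    raise ValueError("406 Not Acceptable")

def convert(content_type, accept, body):
    # Every conversion goes through the DTO, so N formats need
    # N parsers + N serializers instead of N*N direct converters.
    return serialize_dto(accept, parse_body(content_type, body))
```

The comment in `convert` is the design argument for the pivot: formats never need to know about each other, only about the DTO.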

@mherwege
Contributor

I did a quick check, modifying one of the things endpoints. I can have multiple accept types and different documentation per accept type, just by changing the annotations. Here is a little video illustrating it.


In the video, the underlying DTO does not change, but that would be possible as well.

@lolodomo
Contributor Author

lolodomo commented Feb 12, 2025

I was not expecting such an "intense" discussion on the REST API itself already... I'll let you reach a conclusion; the API part is for me more or less a detail and a small part of my changes. In the meantime, I will finalize the changes to make sure that the POST APIs will not use the items/things registries at all.

@mherwege : if several media types are offered by the API as output, how do you define and annotate the method so that the API returns ThingDTO.class in case of media type "JSON", String.class in case of media type "YAML" and String.class in case of media type "DSL"? How could a Java method return different types? Should I use Object? Can you provide an example, please?

Same question for the POST (input) body: how do you define a "variable" input type? I would also appreciate an example. Should I use the generic Object class for my method parameter type?

@spacemanspiff2007
Contributor

spacemanspiff2007 commented Feb 12, 2025

Yes, maybe I am making the wrong assumption that all formats contain all the info and that you can translate in all directions. Still, wouldn't that mean:

* to JSON: you could use a `content-type` header to tell the endpoint what content language (YAML, DSL...) you are providing

* from JSON: you could use an `accept` header to tell the endpoint what content language (YAML, DSL...) it should return

But if you parse a file format that contains thing data as input, you have to set the output DTO based on the request Content-Type. Can you show an example of how that would work in the docs?

E.g.

parse from thing-yaml   -> output format: json, output schema ThingDTO | input format: yaml, input schema YamlThingsDto
parse from items-yaml   -> output format: json, output schema ItemsDTO | input format: yaml, input schema YamlItemsDto

create from things-json -> output format: yaml, output schema YamlThingsDto | input format: json, input schema YamlThingsDto
create from items-json  -> output format: yaml, output schema YamlItemsDto  | input format: json, input schema YamlItemsDto

My actual goal is simple:
I just want the schema for both request and response in the API docs to be correct and not overly permissive.

@mherwege
Contributor

@mherwege : if several media types are offered by the API as output, how do you define and annotate the method so that the API returns ThingDTO.class in case of media type "JSON", String.class in case of media type "YAML" and String.class in case of media type "DSL"? How could a Java method return different types? Should I use Object? Can you provide an example, please?

Have a look here: main...mherwege:openhab-core:yaml

I patched the getThings endpoint to be able to retrieve the things in JSON or YAML. The YAML output is not correct yet, as I didn't fully implement the conversion (I basically copied the JSON one and slightly modified it without paying much attention), but that's not the point; it is a quick prototype. And it shows that JSON/YAML conversion would almost come for free, as long as the object backing the YAML format does not derive from the object backing the JSON format. And if it does, you can still solve that.

Note that all REST calls in the code return a Response object. What is being updated is purely internal to the method.
Swagger documentation depends on the annotations, and that's where you can distinguish between media types and attach different objects for documentation. I did distinguish in the code, but at the moment both refer to the same object (in which case distinguishing wouldn't be required; you can just keep one @Content annotation).

This example is for a producer, but a consumer can be done in the same way.

DSL is most likely much more tricky from a documentation perspective and I didn't investigate deeper. In the worst case, we would have to treat the whole DSL input/output as a String object, but it might be possible to do better.

@lolodomo
Contributor Author

Also I was under the impression that the feature would allow it to select multiple items/things in the UI and then generate the corresponding code view.
Or generate multiple items with links and metadata from a dsl definition.
Is that not the case?

No, providing such new features has not been discussed yet. But I agree we can open a door for that at the API level for the future.

@lolodomo
Contributor Author

Since it's just a transformation and doesn't touch existing things/items, it should be a separate transformation endpoint.

@mherwege: is it so obvious to you too?

@lolodomo
Contributor Author

lolodomo commented Feb 15, 2025

APIs now accept several items/things.

Image

Image

I still need to fully unplug access to the registries from these APIs; that is not yet done.

@mherwege
Contributor

mherwege commented Feb 15, 2025

No fields are editable in this context so this should have been blank ...

They would be for managed items. Non-managed items show the same fields, but with a lock icon. So you are correct in this case. But for managed items, what is shown is editable.

However I see no use case at all for managing objects in the registry with something different than the json schema that we have today

This point is very important and I agree 100% with @spacemanspiff2007 on it. My intention has never been to drive the APIs managing our items/things registries with another format than JSON. So for example there is absolutely no need to enhance the current API that updates a thing in the registry. This API uses JSON as input and that is fine.

I am fine with that. I agree. But I return the question. What is that canonical JSON format we are talking about? Is it the existing REST endpoints? Or is it the internal UI object structure that gets translated into one or multiple JSONs for one or multiple REST calls? That object structure also sometimes differs per UI page.
The YAML we see in the UI is a direct translation of that internal object structure, and it represents the data in a specific UI page. It does not match one-on-one with any existing REST endpoint. The UI code complexity of doing that translation between internal object structure and YAML is low. So I am wondering if we really make things easier by doing that in core. The translation in core would not be based on the real structure of the UI internal object. So there would be processing in the UI to potentially add fields to the call and remove fields from the response, with logic to go with it.

I see more use for it when converting to/from DSL, or for converting to a YAML alternative for DSL.

I agree showing JSON in a code tab would not be very useful, although it is done for sitemaps (both DSL and JSON) on the same screen. But the UI uses YAML for a reason: it is more human readable. The YAML and the internal object representation (visualized as JSON) are just different visualizations of the same internal UI object.

For 11, I do not understand; can you provide an example of what is missing?

Indeed, I stand corrected: the item GET includes metadata and links. Adding or updating requires 3 endpoints.

Sorry, I don't understand: where is the problem with items in DSL? They embed all metadata and channel links.

The item edit view in the UI does not include item links and metadata in the same place. So the YAML code view also does not have them, and the internal object representation does not have them. So there is a risk of losing information when you count on it being complete.
Links and metadata are maintained in separate pages with different underlying object models. Trying to have it all in the code views would make the content of the code view and the UI representation out of sync.

For 1 and 3, we could have an API taking an ItemDTO JSON as input and a String as output, either YAML or DSL depending on the accept header. The accept header is an alternative to my current "format" request parameter.

It doesn't have to be a string. As I illustrated in the POC, you could define a YAML output as such and have it fully documented as YAML.

I still have separate new GET APIs to get all items/things in file format, but I could merge them with our existing APIs if you are convinced this is better, even with all the existing API parameters and the ones I need to add.

I am OK with separate endpoints.

It has to be a list of things because DSL allows the bridge/thing syntax:

Agreed in the context of exporting existing things to a DSL file format, not in the context of the code view for editing a single thing (as it is now). The JSON also does not have the nesting.

Or generate multiple items with links and metadata from a dsl definition.
Is that not the case?

The item import function already does that at the moment, but the UI parser is unable to cope with the metadata. And this is where @lolodomo's work will be a big improvement, doing all that in core.

@mherwege: is it so obvious to you too?

Yes, it is. I got carried away envisioning using it also on existing endpoints, but I never questioned that there was a need for separate endpoints.

I like this idea.

@lolodomo
Contributor Author

lolodomo commented Feb 15, 2025

What is that canonical JSON format we are talking about? Is it the existing REST endpoints? Or is it the internal UI object structure that gets translated into one or multiple JSONs for one or multiple REST calls? That object structure also sometimes differs per UI page.
The YAML we see in the UI is a direct translation of that internal object structure, and it represents the data in a specific UI page. It does not match one-on-one with any existing REST endpoint. The UI code complexity of doing that translation between internal object structure and YAML is low. So I am wondering if we really make things easier by doing that in core. The translation in core would not be based on the real structure of the UI internal object. So there would be processing in the UI to potentially add fields to the call and remove fields from the response, with logic to go with it.

The JSON we are talking about consists of the DTO objects we have in core that are used by our REST APIs.
I was not aware of these other dedicated JSON objects handled in Main UI.

Remember that one of the initial wishes was to have in the Main UI code tab YAML code that is the same as what will be used in the future as file format. The idea is not to have a specific YAML format for Main UI. All this discussion started with that assumption and @florian-h05 told us it was doable. So this is something to clarify quickly with @florian-h05.

The item edit view in the UI does not include item links and metadata in the same place. So the YAML code view also does not have them, and the internal object representation does not have them. So there is a risk of losing information when you count on it being complete.
Links and metadata are maintained in separate pages with different underlying object models. Trying to have it all in the code views would make the content of the code view and the UI representation out of sync.

Until Main UI is enhanced to support channel links and metadata in the code tab, Main UI could call the API with no channel links and no metadata so that they will not be displayed in the code tab. But of course, in that case, what is displayed in the code tab is not the full file format but rather a partial format.
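On the UI side, that filtering could look like the following sketch. The item field names (metadata, channelLinks) are illustrative assumptions, not the exact DTO schema:

```python
import json

# Hypothetical internal item payload as Main UI might hold it;
# field names are illustrative, not the exact core DTO schema.
item = {
    "name": "LivingRoom_Light",
    "type": "Switch",
    "label": "Living Room Light",
    "metadata": {"stateDescription": {"pattern": "%s"}},
    "channelLinks": [{"channelUID": "hue:0210:1:bulb1:color"}],
}

def strip_for_code_tab(item_dto: dict) -> dict:
    """Drop links and metadata before asking core to generate the
    file-format code, so they never show up in the code tab."""
    omit = {"metadata", "channelLinks"}
    return {k: v for k, v in item_dto.items() if k not in omit}

# Body for the generate/create call: only the partial definition.
payload = json.dumps({"items": [strip_for_code_tab(item)]})
```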

It doesn't have to be a string. As I illustrated in the POC, you could define a YAML output as such and have it fully documented as YAML.

Ok, I will have a look.
But for DSL, I guess we need String.

I still have separate new GET APIs to get all items/things in file format, but I could merge them with our existing APIs if you are convinced this is better, even with all the existing API parameters and the ones I need to add.

I am OK with separate endpoints.

Perfect, I prefer to separate them.

It has to be a list of things because DSL allows the bridge/thing syntax:

Agreed in the context of exporting existing things to a DSL file format, not in the context of the code view for editing a single thing (as it is now). The JSON also does not have the nesting.

But Main UI could call the new API providing an array with a single JSON element (item or thing), no? The generated YAML or DSL would then only contain one item or one thing.

@mherwege: is it so obvious to you too?

Yes, it is. I got carried away envisioning using it also on existing endpoints, but I never questioned that there was a need for separate endpoints.

Ok, so I am going to move the new APIs to a new separate /file-format endpoint, as suggested by @spacemanspiff2007.

@spacemanspiff2007
Contributor

Agreed in the context of exporting existing things to a DSL file format, not in the context of the code view for editing a single thing (as it is now). The JSON also does not have the nesting.

I'm aware; the JSON will return two separate thing definitions, which is why a list is required.
That's why I argued that parsing/creating DSL will have a different input/output schema than parsing/creating UI YAML.

The JSON we are talking about is the DTO objects we have in core that are used by our REST APIs.

That's my understanding too. The individual objects are JSON DTOs of the core objects.
The return or input schema is a structure composed of these objects, e.g. for items-dsl:

{
  "items": [ItemDto, ...],
  "links": [LinkDto, ...]
}

When parsing a format, one will have the data in the correct format to feed it to the other endpoints.
When creating a format, the values returned from other endpoints can be used directly to create the desired format.
If the goal is to create only one element, a list with a single entry can be provided.
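That round trip can be sketched as follows. The parsed structure follows the items-dsl example above; the (method, path) pairs target the existing JSON registry endpoints (PUT /rest/items/{name}, PUT /rest/links/{itemName}/{channelUID}), and the DTO field names are illustrative:

```python
# Simulated response from POST /file-format/parse for an items-DSL input:
# a structure composed of the individual JSON DTOs.
parsed = {
    "items": [{"name": "Temp", "type": "Number:Temperature"}],
    "links": [{"itemName": "Temp", "channelUID": "mqtt:topic:broker:t:temp"}],
}

def registry_calls(parsed: dict) -> list:
    """Map each parsed DTO onto the existing JSON registry endpoint
    (method, path) that would persist it as a managed object."""
    calls = [("PUT", f"/rest/items/{i['name']}") for i in parsed["items"]]
    calls += [("PUT", f"/rest/links/{l['itemName']}/{l['channelUID']}")
              for l in parsed["links"]]
    return calls
```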

@mherwege
Contributor

mherwege commented Feb 16, 2025

But for DSL, I guess we need String.

I think it must be possible as well, but a whole lot more complex. It is all about the automatic endpoint documentation, whether you will have a documented data structure in Swagger or not. That's something that can always be refined later.

@lolodomo
Contributor Author

lolodomo commented Feb 16, 2025

No existing endpoints are updated anymore. Instead, there is this new endpoint:

Image

@mherwege
Contributor

I think it must be possible as well, but a whole lot more complex. It is all about the automatic endpoint documentation, whether you will have a documented data structure in Swagger or not. That's something that can always be refined later.

See my answer in the PR. It is not complicated at all to give a proper example for DSL.

@rkoshak

rkoshak commented Feb 17, 2025

just not shown in the code view. This could be fixed.

I'll create an issue for this.

Not currently displayed in Main UI but available in DSL. Of course I will update the current YAML to provide this missing information.

Or is there already an issue open for this? Or it's not needed?

Links and metadata are maintained in separate pages with different underlying object models. Trying to have all in the code views makes the content of the code view and UI representation out of sync.

This makes the code view of Items unique among OH entities. For consistency's sake, if for no other reason, there's a strong argument for making a code tab show the code for everything you see on the Item's page, which then should include everything that you can define for a DSL Item. Maybe we add a code tab to the main Item's page but keep the code tab when editing the Item and the metadata. The link is not represented with a code tab anywhere.

@mherwege
Contributor

mherwege commented Mar 4, 2025

@lolodomo Do you already have code to convert DSL to the internal JSON format? I really would like to get rid of the Nearley parsers in the UI, as the UI duplicates the DSL to JSON conversion with a separate parser. That would replace the DSL items import Nearley parser code, and would open the door for things, sitemaps and persistence as well. As you know, I am particularly interested in sitemaps, as we have both a DSL Nearley parser and a DSL generator in the UI, with all the consequential difficulty of keeping them in sync. But I am waiting to see how it is done for the other formats. And replacing the items import by a call to a REST endpoint for the translation would be a first step.
To me, this has little to do with YAML at this point. YAML is just another UI representation of an internal UI JavaScript object, which is actually JSON when used in other REST endpoints. That conversion in the UI (internal <--> JSON and internal <--> YAML) is transparent. I can see making that a REST call as well, but I don't consider it a requirement inside the UI. That's the bigger discussion about the shared YAML format.

lolodomo added a commit to lolodomo/openhab-core that referenced this issue Mar 4, 2025
Related to openhab#4585

POST /file-format/items/parse to parse items from file format
POST /file-format/items/create to create items in file format
POST /file-format/things/parse to parse things from file format
POST /file-format/things/create to create things in file format

Signed-off-by: Laurent Garnier <[email protected]>
@lolodomo
Contributor Author

lolodomo commented Mar 4, 2025

@mherwege: I submitted a WIP PR just to show the current state and how parsing is achieved. Not everything is ready yet; the retrieval of metadata when parsing DSL items is not done yet, for example.

@lolodomo
Contributor Author

lolodomo commented Mar 4, 2025

And replacing the items import by a call to a REST endpoint for the translation would be a first step.

I agree, that could be a good first usage of these new APIs.

That would replace the DSL items import Nearley parser code, and would open the door for things, sitemaps and persistence as well.

And that would also be a solution to easily implement import of DSL things.

I am in fact building, globally, something that will make it easy to go from unmanaged to managed things/items and vice versa.

lolodomo added a commit to lolodomo/openhab-core that referenced this issue Mar 7, 2025
Related to openhab#4585

POST /file-format/parse to parse file format
POST /file-format/create to create file format

Signed-off-by: Laurent Garnier <[email protected]>
lolodomo added a commit to lolodomo/openhab-core that referenced this issue Mar 8, 2025
Related to openhab#4585

POST /file-format/parse to parse file format
POST /file-format/create to create file format

Signed-off-by: Laurent Garnier <[email protected]>
lolodomo added a commit to lolodomo/openhab-core that referenced this issue Mar 9, 2025
Related to openhab#4585

POST /file-format/parse to parse file format
POST /file-format/create to create file format

Signed-off-by: Laurent Garnier <[email protected]>
@lolodomo
Contributor Author

lolodomo commented Mar 9, 2025

@florian-h05 @mherwege @jimtng: PR #4630 will allow adding a new tab, which could be named "File format", to the item configuration page and the thing configuration page. Even if it should eventually also be possible to let the user adjust an item/thing by updating the DSL code in this new tab, a first step that is probably not too difficult to achieve in Main UI would be to provide the DSL code in this new tab as read-only. You could retrieve the DSL code just by calling the API /file-format/create with the accept header text/vnd.openhab.dsl.item or text/vnd.openhab.dsl.thing, providing a FileFormatDTO object as request body that contains the current definition of the item or thing. It would be fantastic to have that new tab.
And the day we have a new file format (like YAML), you will just have to change the accept header of the API request to get the different file format.
The idea at this step is to keep the current Main UI-specific YAML code in the code tab and add a new separate tab for the file format code (starting with read-only content).
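A hedged sketch of how Main UI could build that call, assuming a hypothetical FileFormatDTO wrapper with a things list (the host, port and field names are illustrative; the endpoint and DSL media type are the ones mentioned above):

```python
import json
import urllib.request

# Current thing definition as Main UI might hold it; the FileFormatDTO
# wrapper shape and field names are assumptions for illustration.
file_format_dto = {"things": [{"UID": "hue:0210:1:bulb1",
                               "label": "Hue Bulb",
                               "configuration": {"lightId": "1"}}]}

req = urllib.request.Request(
    "http://openhab:8080/rest/file-format/create",  # assumed host/port
    data=json.dumps(file_format_dto).encode(),
    method="POST",
    headers={
        "Content-Type": "application/json",
        # Accept selects the generated language; a future YAML file
        # format would only need a different value here.
        "Accept": "text/vnd.openhab.dsl.thing",
    },
)
# req is built but not sent here; the UI would send it and show the
# returned DSL text in the read-only "File format" tab.
```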
WDYT?

@Nadahar
Contributor

Nadahar commented Mar 9, 2025

This is a lot to read, so I probably haven't gotten all the details, but I'd just like to add a couple of comments:

  • In many places, MainUI converts between JSON and YAML each time the user changes tab between "Design" and "Code". JSON and YAML are basically just two different "syntaxes" for describing objects, so conversion should be straightforward and standard. Is it really wise to introduce API calls, with the resulting waiting time, each time the user wants to switch tabs? Many OH installations run on relatively weak hardware, so even if this is "quick" in development testing, this might not be the reality in production use out there.
  • I had no idea that a conversion to DSL already existed; to me that sounds like a bad idea unless you can get the Eclipse EMF framework to do it as an exact reverse of the parsing. I don't think YAML, JSON and DSL should be seen as just 3 different file formats. JSON and YAML are different formats/syntaxes expressing the same thing. DSL is something entirely different in my view; it's closer to a "narrow scripting language". Trying to "generate DSL" sounds to me like trying to write a decompiler, not something that I would think would have a great chance of success.

@lolodomo
Contributor Author

lolodomo commented Mar 9, 2025

Trying to "generate DSL" sounds to me like trying to write a decompiler, not something that I would think would have a great chance of success.

It was implemented successfully in a relatively easy way, thanks to Xtext!

@Nadahar
Contributor

Nadahar commented Mar 9, 2025

It was implemented successfully in a relatively easy way, thanks to Xtext!

That's impressive. Still, I'd claim that one should differentiate between the "source DSL" and the generated DSL: there are different ways to express things, and the "decompiled" result might not resemble the original. Also, comments are probably lost due to JSON's limitations.

That's one thing that actually annoys me with the current scheme where JSON is the "base format": the fact that JSON can't hold comments. YAML (as much as I dislike it in general) can, and when making widgets, for example, it's very frustrating that all comments disappear when you save. It makes it difficult to keep track of things. Personally, I copy/paste the YAML back and forth to Notepad++ and don't actually "use" the JSON DB as the storage. That's how bad I think that is.

@rkoshak

rkoshak commented Mar 12, 2025

The fact that JSON can't hold comments

The JSON can hold arbitrary elements, though. So you can add a field like comment: something I want to keep to your YAML and it'll get saved as an element in the JSON. That's a technique that others who do a lot of widget development use.

I'm neither advocating for this nor denigrating it, just pointing it out as an option some people use.
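A minimal sketch of that workaround with an illustrative widget definition: the extra comment field is ordinary data, so it survives the round trip through the JSON DB:

```python
import json

# A widget definition carrying an extra "comment" field: JSON has no
# comment syntax, but nothing stops you from storing one as data.
# Field names are illustrative.
widget = {
    "uid": "my_widget",
    "comment": "something I want to keep",
    "component": "f7-card",
}

stored = json.dumps(widget)   # what would land in the JSON DB
restored = json.loads(stored)
assert restored["comment"] == "something I want to keep"
```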


7 participants