Unikname Development Report #16
Welcome to the 16th edition
Unikname rebranding from a developer point of view 💻
Impacts on the code base
First of all, the code base has been impacted. We’ve written three years of code with UNS in mind while building our network products. UNS was everywhere and had several meanings: a blockchain, a token, and also a domain. We named variables, constants, functions, and even comments with UNS references. We haven’t refactored all these lines because it would have been horrible, tedious, and boring work. Instead, we chose to keep the existing code and add a section to our open-source projects’ READMEs explaining the meaning of the old terms. We couldn’t imagine new contributors struggling with the old terminology and then giving up in the face of the difficulty. New code will be written with the new terms, and the old names will progressively disappear (that’s the natural evolution of a codebase).
To be easy to reach and communicate about, our products are deployed on dedicated domains. Before the rebranding, we had two main domains: unikname.com and uns.network. We will progressively move our network products to a new domain: unikname.network. For most products, the process was to update nameservers and add new DNS records, pointing human-readable names to the products’ hosting services (like Netlify or CleverCloud). It was relatively quick, depending on the service provider’s TLS certificate generation and cache updates. To keep backward compatibility with already published and shared URLs, we set up permanent HTTP redirections (301).
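On Netlify, for instance, such a permanent redirect can be declared in a `_redirects` file. This is only a sketch of the idea, not our actual configuration:

```
# Redirect every old uns.network URL to unikname.network,
# preserving the path, with a forced permanent (301) redirect
https://uns.network/* https://unikname.network/:splat 301!
```

The `:splat` placeholder carries over whatever path followed the old domain, so previously shared deep links keep working.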
For our WordPress-based landing pages, it was trickier, because this CMS stores each resource (uploaded image or text file) in the database with its absolute public URL (very bad, isn’t it?). So the old URL is everywhere, from pages to templates, resulting in 404 Not Found images all over the pages. Luckily, the WordPress CLI provides a search-replace command which simply substitutes one string for another across all database entries.
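The invocation looks roughly like this (the domains below are placeholders; running with `--dry-run` first previews the changes without touching the database):

```shell
# Preview how many database entries would be affected
wp search-replace 'https://old-domain.example' 'https://new-domain.example' --dry-run

# Then perform the real substitution across all tables
wp search-replace 'https://old-domain.example' 'https://new-domain.example' --all-tables
```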
Some URLs haven’t been changed yet because they’re less visible, more sensitive, and called by products that may not follow HTTP redirections. For example, this is the case for https://api.uns.network, which provides HTTP access to our Unikname Network node and is called by all our products (SDK, explorer, CLI, …) and maybe by unknown community tools. Both domains, api.uns.network and api.unikname.network, will coexist for a long time.
My Unikname PWA
My Unikname app is a Progressive Web App, and PWAs are still a recent technology with heterogeneous support across browsers. Even though the internal rebranding of the app’s codebase was successful (colors, images, labels, and so on), the external rebranding is not so easy (and maybe impossible). By “external rebranding” we mean the brand-specific configuration given to browsers through manifest.json. This file tells your browser that a website is a PWA and contains PWA-specific settings like the name, description, theme color, background color, orientation, display mode, and the icons to show when the app is installed on your device’s home screen. The painful part is that browsers don’t yet handle updates to this information. It means that users who already have My Unikname app installed on their home screen will see the old icons until they decide to reinstall the app. It’s bad for the user experience, but for now, our hands are tied.
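For reference, those brand-specific settings live in a `manifest.json` shaped roughly like this (illustrative values, not our exact file):

```json
{
  "name": "My Unikname",
  "short_name": "Unikname",
  "description": "Manage your UniknameID",
  "theme_color": "#1a1a2e",
  "background_color": "#ffffff",
  "orientation": "portrait",
  "display": "standalone",
  "icons": [
    { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icons/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```

It’s precisely the `icons` entries above that browsers cache at install time and don’t refresh afterwards.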
Unikname Network node
The Unikname Network protocol, based on the Ark Ecosystem protocol, is fueled by a protocol token. Before the rebranding, our protocol token was named after the blockchain: UNS. Now it’s called UNIK. The token name is “just” a protocol configuration, even though it appears in several places (APIs, logs, configuration files, …). This modification wasn’t our top priority because we could override it in our clients (My Unikname app or the Explorer). Some changes will ship in the next node release.
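In Ark-based networks, the token name typically lives in the network configuration shipped with the node, along these lines (a simplified sketch, not our exact file):

```json
{
  "client": {
    "token": "UNIK",
    "symbol": "UNIK"
  }
}
```

Changing this configuration value propagates the new name to the node’s APIs and logs, which is why it can wait for a regular node release.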
Unikname Network also has another token, a non-fungible token previously called UNIK but now, obviously, named UNIKNAME. This name is visible in multiple places too: API URLs, queries and responses, logs, the database, … Keeping backward compatibility here is a big deal: we don’t want to break existing calls, but we also don’t want to spend a lot of time making both names live together (because that has little value for our users). We’re working on this question, and it could be part of a future breaking release (or not).
UNS CLI is our command-line tool to manage your UniknameID from a terminal or script files. It was designed with the old names in mind (UNS for the protocol token and UNIK for the NFT) and was built under the network name (@uns/cli for download and uns for the command). Renaming a CLI is tedious, adds little value, and breaks all existing uses of the CLI (think about automated tools running it…). We think our CLI should operate at a higher level: helping with Unikname management, interacting with the network, but also setting up Unikname products like Unikname Connect. That’s why we envision creating a new command-line tool with more features and a name closer to Unikname. You’ll be notified in due time when this project launches. For now, the current CLI won’t be rebranded and will remain “as-is”.
A blessing in disguise
As you’ve seen, a lot of Unikname products have been impacted by this rebranding: MyUniknameApp, UniknameNetwork core, the CLI, the Resolver, the Delegate website, …
But even if old terms are still present, the brand unification should make things simpler and help us and the community build better products for the decentralized ecosystem.
How to deal with MongoDB migration scripts? 📈
In the Unikname ecosystem, like in the majority of web products, we work with a centralized database. We chose MongoDB for its capabilities, document-oriented model, and large community.
Like in any other database system, we have schemas to model our data and prevent bad inputs. But what happens when your product evolves with new features? When you must handle new rules and new needs while keeping backward compatibility?
To be more concrete about our needs: we work with RESTful private resources, only accessible to their owner(s), administrators, or other explicitly authorized parties.
Following the separation of concerns (SoC) principle, this access-control model and its values are kept separate from the functional model (like our users or current resources).
It means we don’t need to attach a role property to our user model and then set read/write rules for each role.
Instead, we:
- Describe our CASBIN model,
- Dynamically fill it with values based on HTTP requests (e.g., on user creation we add default access rules for that user), then persist them,
- Add a NestJS guard on HTTP requests,
- Intercept HTTP requests and enforce access control based on the request data and the ACL values.
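As an illustration, a Casbin model for RESTful resources could look like the following (a sketch assuming Casbin’s built-in `keyMatch2` and `regexMatch` matchers; our actual model may differ):

```ini
[request_definition]
r = sub, obj, act

[policy_definition]
p = sub, obj, act

[policy_effect]
e = some(where (p.eft == allow))

[matchers]
m = r.sub == p.sub && keyMatch2(r.obj, p.obj) && regexMatch(r.act, p.act)
```

With such a model, a policy row persisted at user creation might look like `p, alice, /resources/:id, (GET)|(PUT)`, granting that user read/write access to her own resources (the subject and path here are hypothetical).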
Now that you have more details about our needs, let’s spot the main issue: how can we update the CASBIN model when it already holds values, while also adding new resources to our RESTful API?
Like any other database system, MongoDB needs migration scripts to update data according to new schemas, but it doesn’t provide a native migration tool.
So we chose the migrate-mongo community tool to do the job.
It’s really simple:
- create a new directory for migration scripts,
- add a config file,
- write your migration script,
- then add the migration tool call to your deployment flow.
Awesome, isn’t it? You can even split your migration scripts to improve readability as you have more and more models to update and migrations become more and more complex. That’s cool, but what happens when the second or third migration script fails? You have to manually roll back every script, in the right order, to get back to your previous data state. We wanted to automate that rollback flow to prevent human errors. So we added the ability to automatically identify the migration scripts and their order, and then, when the failure hook triggers, roll back every applied script in reverse order. Now the team is much calmer when an error occurs during a database migration, keeping the focus on why it fails instead of how to recover from the failure.
The feature is PR #331 of `migrate-mongo` project and should be made available to the community soon.
On migrate-mongo up, the tool checks for new scripts (based on the changelog document) and runs them in sequence. If one fails, it returns an error (and cancels the deployment if configured).
Then you can call the rollback command migrate-mongo down --block, which calls the down hook of each script to revert the database state.
👉 To learn more, you can visit the dedicated repo.
Meet the Unikname team to get a demonstration and help integrating it into your project.