GitLab npm Package Registry

When migrating our services to the Scaleway cluster, we also migrated our npm packages, previously hosted in a Velero container in the old cluster, to the "Package Registry" offered for free by gitlab.com.

This post covers how to use this registry.

Registry access

The "source project" hosting the Package Registry for the shared Ceneau packages is 'bo-common'. In the following, 'source_project_id' refers to the id of the source project on GitLab.

Development workstation

When working on an application project (exchange, maintenance, etc.), the development workstation must be able to reach the registry with read access.

When working on the 'bo-common' project itself, the development workstation must be able to reach the registry with write access, in order to publish new versions of the package.

  • Creating a Deploy Token per developer

Each Ceneau developer must therefore create a Deploy Token per development workstation (so that they can be revoked individually in case of compromise), with write permission (write_package_registry) on the registry.
Where?
gitlab.com > source project > Settings > Repository > Deploy Tokens

  • Configuring npm authentication on the development workstation

npm can be configured globally on the development workstation with this command (to be used even if yarn is employed for other day-to-day operations):

npm config set -- //gitlab.com/api/v4/projects/<source_project_id>/packages/npm/:_authToken=<deploy_token_password>

Per-project npm configuration (consumer service)

This only concerns projects that use the npm packages.

  • Configuring the registry target per project

The .npmrc file targets the package registry according to the npm scope (here, '@ceneau' is the scope of the Ceneau packages).
This file lets the developer pull the Ceneau packages into their service, and is also used in the Docker image build as well as in the CI pipelines.
The .npmrc file must contain:

@ceneau:registry=https://gitlab.com/api/v4/projects/<source_project_id>/packages/npm/
  • Dockerfile

The Dockerfiles of projects using npm must use the project's .npmrc to target the package registry, but must also authenticate against the npm registry (as on the development workstation; but here, the image build runs in a separate environment specific to Docker).

Example configuration in a Dockerfile, using a build argument:

ARG PACKAGE_REGISTRY_ACCESS_TOKEN
...
COPY --chown=node:node .npmrc .
RUN echo >> .npmrc
RUN echo "//gitlab.com/api/v4/projects/<source_project_id>/packages/npm/:_authToken=${PACKAGE_REGISTRY_ACCESS_TOKEN}" >> .npmrc
RUN yarn install
RUN rm -f .npmrc

NOTE: we always append a newline to .npmrc first, in case the file does not already end with one.
NOTE: make sure to remove the .npmrc containing the Deploy Token once it has been used.

IMPORTANT: it used to be common to build a service's Docker image manually on a development workstation or a server. This practice is now obsolete: it would require passing the Deploy Token created for the workstation as a build argument, a value that must absolutely not be left lying around in dev files, or worse, committed.
If it were nevertheless necessary, the recommended approach is to store the workstation's Deploy Token in an OS environment variable (Windows, Linux) named CENEAU_PACKAGE_REGISTRY_ACCESS_TOKEN, so that it can be used and passed as a Docker build argument (--build-arg PACKAGE_REGISTRY_ACCESS_TOKEN="${CENEAU_PACKAGE_REGISTRY_ACCESS_TOKEN}").
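
For instance, a full manual build might look like this (a sketch; the image tag is illustrative):

docker build . -t my-service:dev --build-arg PACKAGE_REGISTRY_ACCESS_TOKEN="${CENEAU_PACKAGE_REGISTRY_ACCESS_TOKEN}"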

  • CI pipeline

Finally, the GitLab CI pipelines (`.gitlab-ci.yml`) of each project must be adapted to access the registry.
For this, GitLab provides a special variable, $CI_JOB_TOKEN, generated automatically at each pipeline run and whose lifetime is limited to the pipeline's execution. Using this special token, a project can interact with another project of the same group (provided the target project allows it, see the next point).

Passing the token as a Docker build argument:

docker build ... --build-arg PACKAGE_REGISTRY_ACCESS_TOKEN="${CI_JOB_TOKEN}"

WARNING! This argument must be passed to every occurrence of docker build in the pipeline file.

Using the token in the pipeline's test stage

If the pipeline includes a test stage (with 'yarn install' and 'yarn test:ci' for instance), those tests run in yet another fresh environment (distinct from the development workstation and from the Docker build), to which the token must be passed again so that 'yarn install' can fetch the Ceneau packages. In the test stage, add this block to complete the `.npmrc` used during that stage:

  before_script:
    # Add access token authentication to npm configuration to reach the package registry (on another project)
    - echo >> .npmrc
    - echo "//gitlab.com/api/v4/projects/<source_project_id>/packages/npm/:_authToken=${CI_JOB_TOKEN}" >> .npmrc
  • Granting access at the source project level

The source project must explicitly allow the consumer project to use its CI_JOB_TOKENs to access its resources (otherwise a 404 is returned to signal the lack of authorization).

On gitlab.com > source project > Settings > CI/CD > Token Access:
– make sure the option "Limit access to this project" is enabled (security)
– in "Allow CI job tokens from the following projects to access this project", add the consumer project using its path (example: for a project at https://gitlab.com/masociete/mongroupe/monprojet, enter "masociete/mongroupe/monprojet").

NestJS: validating input DTOs

A quick note on how to /properly/ validate incoming DTOs on NestJS endpoints. It is indeed easy to forget the right annotation and accept invalid DTOs.

Primitive types

  @IsNumber() pace: number;
  @IsString() timezone: string;
  @IsBoolean() isActive: boolean;

Optional attributes

  @IsBoolean() @IsOptional() isActive?: boolean;

Arrays

  @IsArray() @ValidateNested({ each: true }) @Type(() => MyItemType) myArray: Array<MyItemType>;

Objects

  @IsNotEmptyObject() @ValidateNested() @Type(() => MyObjectType) myObject: MyObjectType;

Notes:
– @IsNotEmptyObject() is very much required: without it, the complete absence of 'myObject' from the DTO would be considered valid
– { each: true } in @ValidateNested() is not appropriate for Objects
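
Putting these rules together, a minimal self-contained sketch could look like this (class and field names are illustrative; the decorators come from class-validator and class-transformer):

import { IsArray, IsBoolean, IsNotEmptyObject, IsNumber, IsOptional, IsString, ValidateNested } from 'class-validator';
import { Type } from 'class-transformer';

class MyItemType {
  @IsString() label: string;
}

class MyObjectType {
  @IsNumber() id: number;
}

export class MyInputDto {
  @IsNumber() pace: number;
  @IsString() timezone: string;
  @IsBoolean() @IsOptional() isActive?: boolean;

  // Arrays: validate each item against its class
  @IsArray() @ValidateNested({ each: true }) @Type(() => MyItemType) myArray: Array<MyItemType>;

  // Objects: @IsNotEmptyObject() also rejects a missing 'myObject'
  @IsNotEmptyObject() @ValidateNested() @Type(() => MyObjectType) myObject: MyObjectType;
}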

Patching a version in production

This procedure is designed for the following case:

  • a version x.y.z is deployed in production
  • this version contains a bug
  • development commits have been pushed to the repository between the production version and now (if that is not the case, just commit the fix and release a standard new patch version)

The production version x.y.z necessarily corresponds to a tag in source control (otherwise, something went wrong!).

git checkout app-name_x.y.z

From there, in dev, we are iso-prod: the bug should be reproducible, and the required fix can be developed.
Create a new branch "x.y" (without the z, which precisely corresponds to patch versions).

git branch x.y
git checkout x.y

Develop the fix and commit/push (it will land on the branch).

Tag a new version with a higher patch number: the CI pipeline should build and deploy the fixed image.
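
For example, if the production tag was app-name_1.4.2 (illustrative version numbers):

git tag app-name_1.4.3
git push origin app-name_1.4.3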


Once the fix is deployed, the fix commit should be copied onto the 'main' branch so that it also benefits from the correction.

A simple way is cherry-picking. Note the commit hash (visible via git log, or on GitLab), then, back on main:

git checkout main
git cherry-pick <commit-hash>
git push


Main API entrypoints (Symfony)

You want to add a new entrypoint to the main-api? Here are some ropes!

In which controller?

Controllers are grouped into bundles in Symfony. Bundles were primarily used to split the database (as one Bundle controls one database). Nowadays the bundles' responsibilities are as follows:

  • APIBundle: contains the endpoints for external client users or external systems.
  • ConsoleBundle: contains code for the Symfony Console commands (Doctrine schema updates, Ceneau updates of basic data, etc.)
  • DailyBundle: contains the endpoints for the legacy Perl backoffice scripts
  • MeasureBundle: only contains definition of the Measure DB schema
  • RESTBundle: contains the endpoints for the newer NestJS backoffice webservices
  • SVBundle: contains the endpoints for the front application (V1 and Angular apps); within it, the APIController (and all controllers in SVBundle/Controller/API) serves dedicated data for the Angular apps.
  • WIPBundle: used to replace the routes in WIP screens during V1 deployments

Securing the access

Symfony secures access via app/config/security.yml. Within that file, the routes are secured by role or by IP (the routes themselves are defined in app/config/routing.yml).

Two main routes can be used when adding an entrypoint:

  • /app/data is used to serve data to Angular apps: it requires Symfony authentication and leads to the SVBundle.
  • /intern/rest requires no authentication: it's reserved for communication between automatic processes (legacy Perl, NestJS webservices) and is filtered by IP for security. These routes lead to the RESTBundle.
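
As a sketch, the corresponding rules in app/config/security.yml could look like this (the paths match the above; the role and IP range are illustrative):

access_control:
    - { path: ^/intern/rest, roles: IS_AUTHENTICATED_ANONYMOUSLY, ips: [10.0.0.0/8] }
    - { path: ^/app/data, roles: ROLE_USER }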

Within your code file, you can then add a security layer with a PHP annotation, @Secure(roles="ROLE_ADMIN"), just above your method definition:

use JMS\SecurityExtraBundle\Annotation\Secure;
/**
 * @Secure(roles="ROLE_ADMIN")
 */
public function mySecuredMethod() { ...

This will require the user to be authenticated via Symfony with the ROLE_ADMIN role.

Returning data to a NestJS service

NestJS services require static data in the simplest format, so a REST API is the natural fit. Use FOSRestBundle to define the HTTP verb and the route:

use FOS\RestBundle\Controller\Annotations as Rest;
use Sensio\Bundle\FrameworkExtraBundle\Configuration\ParamConverter;

  /**
   * @Rest\Put("/save")
   * @ParamConverter("dtoIn", converter="fos_rest.request_body")
   */
  public function saveMeasurements(MeasurementSaveBulkInputDto $dtoIn)

Sensio's ParamConverter allows us to automatically deserialize the input DTO into a class you defined. It will generate a 400 if the DTO is invalid. Validation within the target class uses JMS serialization (check examples in the code).

You can now return data in 2 ways:

  • create an output DTO class with JMS serialization annotations within it; then create an instance of this class in your controller and simply return it: its JSON version goes back to the consumer service (check MeasurementController::getBulkMeasurements)
  • form your own simple data structure (with array() and keys in it) in your Controller and simply return it: it will be turned into a JSON object (check ExchangeController::getEntitiesInformation)
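
As a sketch of the second approach (route and keys are illustrative):

  /**
   * @Rest\Get("/entities-information")
   */
  public function getEntitiesInformation()
  {
      // A plain array, turned into a JSON object by FOSRestBundle's view layer
      return array(
          'count' => 2,
          'entities' => array('station-a', 'station-b'),
      );
  }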

Returning data to an Angular application

Angular apps typically use the Store as a reactive single source of truth. The reactive structure of the entities in the Store allows for great reactivity within Angular templates. If you intend to keep your data in the Store, then your entrypoint can return a dedicated structure, fit to feed the Store.

Define your route with Symfony @Route annotation:

  /**
   * @Route("/my/route/{entityId}", defaults={"_format"="json"}, name="_process_entity")
   */
  public function processEntity($entityId)

Then, check the following helper classes/methods:

  • EntitiesMap: create a new EntitiesMap and add the entities you want to return via the addEntity/addEntities methods.
  • ApiUtilityService::getJson: from an EntitiesMap, generates a JSON object using the custom serialization, coded by each entity's svSerialize/svSerializeDates/createAssociations methods.
  • ApiUtilityService::renderResponse: offers a JSON template to embed different things, including JSON entities for the Store.

Error: Can’t bind to ‘ngModel’ since it isn’t a known property of

A confusing error message, which may occur if you're using a SubComponent (declared by SubComponentModule) in a ParentComponent (declared in a ParentComponentModule) and:

  • you forgot to add SubComponentModule to ParentComponentModule's imports (then SubComponent is not accessible to ParentComponent!)
  • you forgot to add SubComponent to SubComponentModule's exports (then it's not accessible outside the very module that declares it)

Angular Component: unit testing

If Angular services are quite easy to test (the main difficulty being /what to test/), Component testing comes with more challenges. This post sums up some key points.

Please note that we are using ngx-speculoos to do so, which offers some friendly syntactic sugar for writing the tests. Please go through the presentation of the package before reading further.

What to test?

When testing the Component-Under-Test, here are some suggestions about what to test:

  • the presence of HTMLElements or SubComponents in the generated template (potentially depending on the CUT inputs, some user actions, etc.): for this, the syntactic sugar in the CTester (ngx-speculoos) is welcome.
  • the Service methods called by the CUT (at initialization, on a button push, etc.): this is done in the usual manner, via Service mocks
  • the inputs provided to the SubComponents: the subcomponents are not fully tested here (they are tested in their own unit tests), we only test that we provide them with the right inputs
  • the behaviour of the CUT when a subcomponent outputs: here also, we need to emulate the output emissions of the mocked subcomponents

Component-Under-Test lifecycle hooks

ngOnInit()

When writing tests, you might want to perform some actions before the actual initialization of the CUT (before its ngOnInit() is called), such as: setting up spies on related services, setting up CUT inputs, etc.

It appears the Component actually initializes with the first tester.detectChanges() occurrence (tester is a CTester, see ngx-speculoos). For disambiguation purposes, we suggest creating an initComponent function, to call at the appropriate time in your tests:

  function initComponent(): void {
    tester.detectChanges();
  }

ngOnChanges()

Unfortunately, ngOnChanges will not be called every time you update the CUT inputs. This is something to be done manually, with laborious code such as:

import { SimpleChange } from '@angular/core';

// Manually trigger the hook with a SimpleChange describing the transition
component.ngOnChanges({
  quantity: new SimpleChange(null, { id: 22 }, false)
});
// instead of
// component.quantity = { id: 22 };

SubComponent references

You'll have to mock all sub-components used by the Component-Under-Test (the tests will actually run correctly without them, only displaying console errors and warnings). Yet, you'll soon want to check the inputs of these sub-components (to verify the CUT has provided them with the right data), as well as emulate their outputs (to check the CUT behaves appropriately in these cases).

The fixture created by TestBed helps us get a reference to any subcomponents, and fortunately ngx-speculoos keeps that functionality. In the definition of your CTester, you can implement this kind of accessor:

import { By } from '@angular/platform-browser';
import { ComponentTester } from 'ngx-speculoos';

class CTester extends ComponentTester<StatisticsPanelComponent> {
  // ...
  get selectPeriodComponent(): TimeAbsoluteComponentMock {
    // this selects a component by its Component class;
    // to select by other means, refer to the DebugElement query documentation
    const debugElement = this.fixture.debugElement.query(By.directive(TimeAbsoluteComponentMock));
    return debugElement.componentInstance as TimeAbsoluteComponentMock;
  }
  // ...
}

Then in your test:

it('should give its child appropriate inputs', () => {
  // ... test setup ...
  expect(tester.selectPeriodComponent.suggestTime).toBeFalse();
});
it('should behave appropriately when its child outputs', () => {
  tester.selectPeriodComponent.periodChanged.next({type: PeriodType.Absolute});
  expect(......);
});

Mocking Pipes

To mock pipes (especially custom pipes), one solution is to override the mocked pipe prototype directly. It’s a bit of a brutal solution, but it works.

First, create a mock of the pipe, which we will declare in the TestBed options, so it’s accessible from the CUT:

import { Pipe, PipeTransform } from '@angular/core';

@Pipe({ name: 'svUserSetting', pure: false })
export class SvUserSettingPipeMock implements PipeTransform {
  transform(obj: unknown): unknown | null {
    return null;
  }
}

The mock will directly be used by the CUT, as it's the only declaration matching the name 'svUserSetting' within TestBed. Now, we'll hijack its behavior to test different scenarios:

beforeEach(() => { // ... or directly in a test
  spyOn(SvUserSettingPipeMock.prototype, 'transform').and.returnValue(true);
});
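
A test can then steer the pipe's output and assert on the rendered template; a sketch (the selector is illustrative, and assumes the element() accessor of the ngx-speculoos ComponentTester):

it('should render the value produced by the pipe', () => {
  spyOn(SvUserSettingPipeMock.prototype, 'transform').and.returnValue('fr');
  tester.detectChanges();
  // illustrative assertion on an element displaying the piped value
  expect(tester.element('#language').textContent).toContain('fr');
});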


Swarm + Traefik: dreadful pitfalls

Swarm is an exquisitely simple container orchestrator, and allied with Traefik for the reverse proxy + load-balancing + certificate generation, you’ve got yourself a powerful solution in a matter of hours or even minutes.

Yet, the structure of the docker-compose file used by Swarm, and the need for each web-exposed service (using a domain name) to join a Traefik network (in the Swarm definition) has often kept me perplexed, even suspicious. And one should be, as potentially dreadful pitfalls await!

Non-isolation

If you want your services to be reachable via domain names, they will have to join the Swarm public network you created for Traefik, so that Traefik can proxy the requests to them when the time comes.

In other words, they will have to share the same network. This means that all exposed services can reach one another by their service name (the one defined in the docker-compose file), whereas one could have expected services in a given stack to be isolated from services in other stacks, unless specified otherwise.
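
As a sketch, a web-exposed service in a stack file typically joins that shared network like this (image, labels, and names are illustrative):

version: "3.8"
services:
  app:
    image: registry.example.com/my-app:latest
    networks:
      - traefik-public   # shared with Traefik - and with every other exposed service
    deploy:
      labels:
        - traefik.enable=true
        - traefik.http.routers.my-app.rule=Host(`app.example.com`)
        - traefik.http.services.my-app.loadbalancer.server.port=3000

networks:
  traefik-public:
    external: true   # the network created beforehand for Traefik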

That could be bearable, but then what happens if two stacks each have their own service, with the same name?

Using same service names in different stacks

It is totally allowed to use the same service name in different stacks, such as a 'db' service in a PHP+MySQL+phpMyAdmin stack named 'api-server', and another 'db' service in a NestJS+MariaDB stack named 'ws-calendar'. Nothing raises a warning during stack deployment, and the command docker service ls will even show distinct fully-named services, here: api-server_db and ws-calendar_db. Great!

Yet, it's not. Indeed, if they share the same network (as is the case if these two services are exposed to the web via domain names and Traefik), Swarm will consider them to be the same replicated service, basing itself only on the short name of each service, not on its full name. As a result, it will start load-balancing between them, which might cause disastrous results, or at the very least not the ones you were expecting.

One should therefore be very cautious about the naming of the stacks' services, which should be distinct across your whole solution.

This especially applies if one had (naively) hoped to manage services of different environments (production, staging, ci) with the same Traefik, relying only on different stacks with different service configurations. If the service names are not carefully kept distinct, staging processes might very well end up modifying production databases just like that. It would be more prudent to create different Traefik networks with different Traefik instances (necessarily on different servers, to avoid port conflicts), or better, to manage totally separate Swarm clusters per environment.

RouteReuseStrategy: advantages and pitfalls

The Angular philosophy is to destroy a Component as soon as it's no longer in use, meaning: no longer in the DOM; and this happens quickly when RouterOutlet is used to display such or such content depending on the URL route. The Component will be created again if the user navigates back to a route that contains it.

Recreating Components can be trouble (depending on the use case)

This is useful for large applications (to reduce memory usage) where Components are kept simple, or when they will not be met again often enough to care about their destruction/reconstruction time.

But for applications where the user is expected to go back and forth between the same Components, this can create several uncomfortable side effects for the user experience:

  • visual clipping may result, especially if some somewhat heavy initialization process occurs in the Component (= in ngOnInit); this might, though, reveal a design smell. For instance, all data fetching (typically, from the API server) and calculation should be done and kept in a sidecar service, whose responsibility is to handle the 'data state' related to the Component, which can fetch it from there very quickly when initializing again.
  • the 'UI state' of the Component is lost, and that might go against the user experience. By 'UI state' (versus 'data state'), I mean all the little interface details, not related to any business data, that evolve when the user plays with the Component. Examples: a selection in a Select node; the beginning of some text in a TextArea node; the expanding of a collapsible widget; the scrolling in a large list; the navigation in a map. For form-related elements, it would be possible to store the form state in a sidecar, but for the rest of the myriad of possible little things, it wouldn't be feasible (too much effort) to store them all and restore them when the Component is met again, and the user would face a 'default UI state' each time. Again, this depends on the complexity and nature of the Components, and on whether we care at all about the UI resetting every time.
  • when depending on third-party Components, we may still face an initialization time, resulting in clipping and annoying waits. It will most likely be very difficult to save the internal 'UI state' of such Components, and for some of them, it would require some unwanted hacking. A third-party map component, for instance, is likely to request its tiles again if destroyed and recreated.

Angular RouteReuseStrategy to the rescue!

Angular offers a mechanism, called RouteReuseStrategy, to keep a Component on the side when navigating away and use it again – unchanged – when the same route is met again. The setup is not sexy; it feels more like a toolbox than an integrated part of the framework, but it's not difficult, and soon enough you can specify the different routes of your application whose related Components will be preserved for later (note that you define reusable routes; you can't flag a Component itself as reusable). And that's it! The navigation is lightning fast again, your Components are met again as they were, hooray!
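
As a sketch, a minimal strategy could look like this (assuming routes flagged with a reuse flag in their data; naming and keying are illustrative):

import { ActivatedRouteSnapshot, DetachedRouteHandle, RouteReuseStrategy } from '@angular/router';

export class AppRouteReuseStrategy implements RouteReuseStrategy {
  private handles = new Map<string, DetachedRouteHandle>();

  // Store the Component aside when leaving a route flagged as reusable
  shouldDetach(route: ActivatedRouteSnapshot): boolean {
    return route.data['reuse'] === true;
  }

  store(route: ActivatedRouteSnapshot, handle: DetachedRouteHandle): void {
    this.handles.set(this.key(route), handle);
  }

  // Reattach the stored Component when the same route is met again
  shouldAttach(route: ActivatedRouteSnapshot): boolean {
    return this.handles.has(this.key(route));
  }

  retrieve(route: ActivatedRouteSnapshot): DetachedRouteHandle | null {
    return this.handles.get(this.key(route)) || null;
  }

  shouldReuseRoute(future: ActivatedRouteSnapshot, curr: ActivatedRouteSnapshot): boolean {
    return future.routeConfig === curr.routeConfig; // Angular's default behavior
  }

  private key(route: ActivatedRouteSnapshot): string {
    return (route.routeConfig && route.routeConfig.path) || '';
  }
}

Register it in your root module providers with { provide: RouteReuseStrategy, useClass: AppRouteReuseStrategy }.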

Notice: be wary though that your Component, when kept aside for later, is still very much alive. Its subscriptions will still be active, so be sure that your Component is not subscribed to anything exterior to itself (which should very much be the case anyway!).

RouteReuseStrategy: caveats

Sadly, no solution is ever perfect straight away.

In my design, there's a little piece missing from RouteReuseStrategy (SO users agree that it feels more like a bug): some Component lifecycle hooks, to be alerted of its detaching (when it's kept aside for later) and its reattaching (when reused).

Again, it might not be a problem for a lot of use cases. For mine, I couldn't think of a proper alternative to what I have.

Description:

  • My Component depends on a part of the URL to identify which entity to display. For instance, the route /station/14 is linked to StationComponent (via RouterOutlet) and indicates that Station #14 should be displayed within it.
  • The Component should initialize depending on the route (and not some external state), because we want the same behavior whether the user has come to this route by navigating the application, or by launching this URL directly. For that purpose, subscribing to ActivatedRoute.params seems the most logical way, as this Observable emits a ParamMap, in which the interesting bit of the URL (defined in the routes given to the RouterModule) is directly proposed as paramMap.get('stationId'). Awesome!
  • Some other Components in my application rely on the displayed Station (if any). These Components cannot subscribe to ActivatedRoute.params as they're not connected to the route (from the RouterModule point of view). They could listen to URL changes by other means, but in a messy way, most likely very coupled to the route definitions. To resolve this, simple: we use a state service to keep the information of the displayed Station, to which my other Components can subscribe, and which is updated by my StationComponent whenever it detects a change in the route parameters.
  • Now if the user navigates to another Component, say, a Digest, then my state service will be alerted that a Digest is now displayed, and not a Station anymore. If they navigate back to a Station – one different than the first – then StationComponent is reused, detects the change in the route parameters and notifies the state service that a Station is now displayed, this very new Station. Good!
  • BUT! If the user navigates away and comes back to the same Station, then StationComponent is reused, but this time it does not see any change in the route parameters (if it was 14, from /station/14, before, and it's still /station/14 now, then no change has occurred, as the observed ActivatedRoute is only the one related to StationComponent). No event is emitted by ActivatedRoute.params, the state service is not notified, resulting in an unsynced state within my application. Curses!

That's where I lack an elegant way to resolve this. To me, the Component should be able to react to its reattaching, during which it would have an extra opportunity to notify the state service.

But RouteReuseStrategy does not call any Component lifecycle hooks. : /

RouteReuseStrategy's missing Component Lifecycle hooks: a fix

A workaround (found on SO) consists in replacing the default RouterOutlet with a custom one, which will trigger some lifecycle hooks on its related Component.

sv-router-outlet.directive.ts

import { ComponentRef, Directive } from '@angular/core';
import { ActivatedRoute, RouterOutlet } from '@angular/router';

@Directive({
  selector: 'sv-router-outlet',
})
export class SvRouterOutletDirective extends RouterOutlet {

  detach(): ComponentRef<any> {
    const instance: any = this.component;
    if (instance && typeof instance.onDetach === 'function') {
      instance.onDetach();
    }
    return super.detach();
  }

  attach(ref: ComponentRef<any>, activatedRoute: ActivatedRoute): void {
    super.attach(ref, activatedRoute);
    if (ref.instance && typeof ref.instance.onAttach === 'function') {
      ref.instance.onAttach(ref, activatedRoute);
    }
  }
}

To use it, define a Module for this new outlet, and import that module wherever you'll replace the default <router-outlet> with <sv-router-outlet>.

sv-router-outlet.module.ts

import { NgModule } from '@angular/core';
import { SvRouterOutletDirective } from './sv-router-outlet.directive';

@NgModule({
  declarations: [
    SvRouterOutletDirective,
  ],
  exports: [
    SvRouterOutletDirective,
  ],
})
export class SvRouterOutletModule { }
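
On the Component side, implement the hooks the custom outlet calls. A sketch for the StationComponent case described earlier (the state-service shape and method name are illustrative):

import { ComponentRef } from '@angular/core';
import { ActivatedRoute } from '@angular/router';

export class StationComponent {
  stationId: number;

  constructor(private stationStateService: { notifyDisplayedStation(id: number): void }) {}

  // Called by SvRouterOutletDirective when this Component is reused:
  // the extra opportunity to notify the state service, even though
  // ActivatedRoute.params emits nothing for an identical URL
  onAttach(ref: ComponentRef<any>, activatedRoute: ActivatedRoute): void {
    this.stationStateService.notifyDisplayedStation(this.stationId);
  }
}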

Git: avoid commits of fdescribe() and fit() (jasmine/jest tests)

While writing tests, some frameworks offer the possibility to "focus" on a particular test or particular suite, by running only those and not all of them.

While this is useful when writing the tests of some component, it can be a dangerous tool when the focus is baked into the tests themselves, such as Jasmine's special instructions fdescribe and fit. Indeed, if these instructions make it through source control, they render the whole Continuous Testing (as part of the Continuous Integration) ineffective, as some errors could pass undetected in the filtered test files.

To avoid breaking Continuous Testing, we'll add a pre-flight check before any commit.

To set up the check, we create a directory at the root of our git folder, and a pre-commit file (the check file will itself be pushed to source control):

misc/git-hooks/pre-commit

#!/bin/sh

# Hooks to do particular checks before allowing a commit.
# Configure Git to use this file for pre-commit checks:
#   git config core.hooksPath $GIT_DIR/../misc/git-hooks/

STATUS=0

# Checking that 'fdescribe(' or 'fit(' (focus test in Angular/NestJS) are not committed by mistake.
MATCHES=$(git --no-pager diff --staged -G'(fit|fdescribe)\(' -U0 --word-diff | grep -P '\{\+(fdescribe|fit)\(' | wc -l)
if [ $MATCHES -gt 0 ]
then
    echo "You forgot to remove all 'fit(' or 'fdescribe(' from your test files."
    STATUS=1
fi

exit $STATUS

Check this file into source control, and have all developers pull it in due time.

Each developer must then configure their git settings by entering the following command (the exact path might have to be adapted, depending on configurations):

git config core.hooksPath $GIT_DIR/../misc/git-hooks/

CI is now protected!

Angular Elements

To allow new developments to be made in Angular and used in the plain-old-JavaScript V1 application, we've worked with something Angular calls "Elements". Angular Elements are built as WebComponents: standard packages importable in any page and fully responsible for their rendering and internal logic.

Modular Design

As their goal is to be injected into a foreign application, the Elements must be very autonomous. This helped a lot to design actual modules, with very few dependencies. The first Elements created (Automatic Export Parameter Setting and Phyc Export) don't even use the Store, which is too coupled to the rest of the whole Superviseur V2 application, and handle their own data privately. A way should be devised, if needed, to use a lightweight version of the Store within the Elements, and for them to be able to exchange data with a Store, if any.

Development organization

The easiest way to develop, test, and debug the Elements is to develop them directly within an Angular environment. To accelerate the process, we don't add them to the whole Superviseur but instead create mini development applications. In addition to the files of the Element itself (contained in, for example, MyFeatureModule), we create/edit these files:

  • angular.json: declaration of the new dev-app
  • package.json: npm command to start the dev-app
  • tsconfig.elements-dev.ts: adding the dev-app start file (note: there could also be one tsconfig file per AngularElement)
  • dev-myFeature.ts: a bootstrap start file
  • appMyFeature.module.ts: a module including MyFeatureModule but also some other modules we want to use within the Angular context, such as BootstrapModule and UserMessageModule.

This allows us to test the application with ease in the Angular context, and to write most of the code there.

When done, we want to build the WebComponents from the Angular Elements. Note that we produce only one package containing all the Components we want to inject into the old V1 application. To do so, we've created a new Angular project called 'elements' in angular.json, with associated build commands in package.json. In src/elements lie:

  • elements.ts: the project start file
  • elements.module.ts: in which we add our new Elements and define the new HTML tag for each one
  • concatenate-elements.js: a handy script to move generated files and concatenate them in the output directory (script used within package.json commands)
  • test.html: a simple plain-old page which imports the generated files, and in which we add the newly defined HTML tag for our Component

Note that test.html must be served by your web server (example: http://localhost/angular/src/elements/test.html) and not opened directly (not file:///D:/Ceneau/angular/src/elements/test.html), for the API server to be reached properly!
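
For reference, defining the HTML tag in elements.module.ts relies on Angular's createCustomElement; a minimal sketch (component name and import path are illustrative):

import { Injector, NgModule } from '@angular/core';
import { createCustomElement } from '@angular/elements';
import { BrowserModule } from '@angular/platform-browser';
import { AepsPilotComponent } from './aeps-pilot.component'; // illustrative

@NgModule({
  imports: [BrowserModule],
  declarations: [AepsPilotComponent],
})
export class ElementsModule {
  constructor(private injector: Injector) {}

  ngDoBootstrap(): void {
    // Wrap the Angular Component as a WebComponent and register its HTML tag
    const element = createCustomElement(AepsPilotComponent, { injector: this.injector });
    customElements.define('sv-aeps-pilot-ccgst', element);
  }
}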

With all that in mind, we’re set to go.

Inputs, Outputs

WebComponents can certainly handle Inputs and Outputs, just as an Angular Component can. But we're in plain HTML, so inputs will simply be given as plain HTML attributes. Note that they have to be kebab-case and not camelCase (the translation is done automatically under the hood by the generated WebComponent)! So if my Angular Component has an @Input() clientId: number, the HTML tag to summon it would be:

  <sv-aeps-pilot-ccgst id="aepspilot" client-id="8"></sv-aeps-pilot-ccgst>

If your input must change, I'm not too sure the change of attribute would be detected by the Component... As an alternative, we can totally get a reference to the Component itself and set its properties directly:

      var component = document.getElementById("aepspilot");
      component.clientId = 2;

Note that you must use camelCase now!

For the outputs, to subscribe to them from a plain-old page:

var node = document.getElementById('myFeature');
node.addEventListener("messageInfo", function(event) {
  var message = event.detail;
  // ...
});

By convention, our Elements have 3 output emitters (EventEmitter<string>) handled by the old application:

  • messageInfo
  • messageWarning
  • messageError
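
On the Angular side, these are ordinary outputs; for instance (a sketch):

  @Output() messageInfo = new EventEmitter<string>();
  @Output() messageWarning = new EventEmitter<string>();
  @Output() messageError = new EventEmitter<string>();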

Limitation: Dev vs Prod

Certain parameters, such as the root URL of the API server, change drastically between the dev environment and staging/production/etc. The way it's done now, parameters are included in the generated Component package, so a package generated for dev will only work in dev, and so on. For now, we only commit the production package to source control, along with the V1 files which import it.

Limitation: Internationalization (i18n)

I struggled with my first Component package size when I tried to include i18n in it, and reverted at the time to using localized strings directly in my components. Now that Angular 9 has come with a new i18n system, it might be easier to use with Angular Elements.

Limitation: External CSS

The fact that the resulting WebComponent will be injected into an existing application does not protect this component from the default CSS rules applied by the user agent, which as we know can vary a great deal from one browser to another (it should be protected from the global CSS styling of the existing application, though, if the encapsulation works well). Therefore, it is paramount to test the Component in situ, meaning in the target application, and correct the Component CSS accordingly. The use of a CSS reset within the Component would also be a good practice, as this article describes: https://blog.jiayihu.net/css-resets-in-shadow-dom/.

Limitation: Error handling

Warning: I want to throw Exceptions in my service, to catch them in my Controller and display a user message accordingly. I tried to do so with a custom Error class, as follows:

 
export enum ExportPhycErrorType {
  SensorCodeCanNotByEmpty = 1,
  SiteMeteoCodeCanNotBeEmpty = 2,
  StationCodeCanNotBeEmpty = 3,
  StationHasActiveMeteoQuantityButNoMeteoCode = 4,
  StationHasActiveCorrelatedQuantityButNoStationCode = 5,
  OnlyOneCorrelatedAllowed = 6,
  OnlyOneMeteoAllowed = 7,
}
export class ExportPhycError extends Error {
  type: ExportPhycErrorType;

  constructor(errorType: ExportPhycErrorType) {
    super();
    this.type = errorType;
  }
}

In my service:

return throwError( new ExportPhycError( ExportPhycErrorType.OnlyOneMeteoAllowed ) ); 

In my controller:

obs.subscribe(
  () => {
    // ...
  },
  (err) => {
    if (err instanceof ExportPhycError) {
      let msg;
      switch (err.type) {
        case ExportPhycErrorType.SensorCodeCanNotByEmpty:
          msg = 'Une grandeur hydrométrique doit avoir un code capteur valide si son export est activé.';
          break;
      }
      // ...
    }
  });

This works well within an Angular application, as well as when an Angular application uses the generated WebComponent. However, it does not work within a plain-old JS app like V1. Indeed, the 'err' passed during error handling is a call stack, and does not match ExportPhycError at all.

A replacement solution is to not use ExportPhycError, and instead directly pass an ExportPhycErrorType.xxx (a number) to throwError(), then switch on the number value.
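
A sketch of that workaround, reusing the enum above (the controller fragment mirrors the one shown earlier):

// In the service: throw the plain number instead of an Error subclass
return throwError(ExportPhycErrorType.OnlyOneMeteoAllowed);

// In the controller: switch directly on the received value
(err) => {
  switch (err) {
    case ExportPhycErrorType.OnlyOneMeteoAllowed:
      // display the matching user message
      break;
  }
}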