HttpClient and cold Observables

TL;DR: use obs.toPromise() on the Observable returned by the HttpClient library if you consider that the end-consumer will attach callbacks to the Promise before the API answers; otherwise, use .shareReplay(1) and subscribe a first time to the Observable yourself.

HttpClient library, whether in NestJS or Angular, uses cold Observables.

A cold Observable will only “activate” (here: do the HTTP call) when it is subscribed to, for the simple reason that when it emits (here: when we receive the response from the call), we want at least someone to be listening, so that the result doesn’t go straight to the void, unheard.

Yet, I find that in a lot of use cases involving API calls, the consumer making the call will or will not deal with the result, depending on the situation (it could be updating a stock, which returns the resulting inventory, which we may or may not want to store somewhere). I’m talking about a situation where you’re developing a module to place calls, used by consumer services.

consumer <-> HTTP module <-> remote API

My first trial involved subscribing to the Observable directly in my HTTP module, before handing the Observable to the consumer.

const obs = httpClient.get(url);
obs.subscribe( () => {}, () => {} );
return obs;

Note that I provide two empty callbacks in .subscribe to avoid the nuclear time-bomb (see previous post).

This works, but it works too much: if you observe API calls in the Network tab of your dev tools, you’ll see 2 API calls at once. Why? Because, the way it’s done, the cold Observable given by HttpClient will activate n times if you subscribe n times to it. So here, it’s fine if the consumer does not subscribe to it, but possibly problematic if it does.
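To see this cold behavior in isolation, here is a dependency-free sketch (a hand-rolled stand-in, not the actual HttpClient Observable): each subscription re-runs the producer function, so each subscription triggers its own “HTTP call”.

```typescript
// Minimal cold-Observable stand-in: the producer runs once per subscribe.
class ColdObservable<T> {
  constructor(private producer: (next: (value: T) => void) => void) {}
  subscribe(next: (value: T) => void): void {
    this.producer(next); // runs anew for every subscriber
  }
}

let apiCalls = 0;
const obs = new ColdObservable<string>((next) => {
  apiCalls++; // stands in for the actual HTTP request
  next("response");
});

obs.subscribe(() => {});
obs.subscribe(() => {});
// apiCalls is now 2: two subscriptions, two "HTTP calls"
```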

For that matter, there’s a nice RxJS operator: .share(). (Btw, don’t let people tell you it’s equivalent to .publish().refCount(): it’s not.)

const obs = httpClient.get(url).pipe( share() );
obs.subscribe( () => {}, () => {} );
return obs;

This operator shares the result of one call with all subscribers present at the time the result comes in: nice! But actually not enough. If the call is really fast, and the consumer takes time to subscribe, it will miss the first result and another call will be made (resulting again in two calls, which we really want to avoid).

So there’s another RxJS operator for that: .shareReplay(1).

const obs = httpClient.get(url).pipe( shareReplay(1) );
obs.subscribe( () => {}, () => {} );
return obs;

This operator will share the last result it got (and HttpClient will only ever emit one result, so we’re fine with that) with any present or future subscriber. Awesome!

Now all that seems a bit far-fetched. There’s a way simpler solution, but note that it’s not appropriate if you think the consumer will attach callbacks later on, after the API has sent its results (in which case error-handling seems simply impossible). It’s to use the Observable’s .toPromise() function. Why? Because, due to the nature of the two structures (Observable and Promise), .toPromise() has to subscribe to the Observable, activating it at the same time, so we don’t have to force the call ourselves anymore. Then, you hand a Promise to the consumer. This Promise will receive the first emission of the Observable (and there will be at most one, so that’s fine), and keep it to show to any callback the consumer attaches via .then().

const obs = httpClient.get(url);
return obs.toPromise();

Only caveat: if the API call runs into an error, and the consumer does not attach an error handler to the Promise before that, then you’re doomed, and the error is likely to bubble up and crash the process. At the time of writing, I found no way to handle this situation, except by intercepting the error with the catchError() operator on the Observable and providing a default replacement value, which is probably not the best from the consumer’s point of view…
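For what it’s worth, here is a sketch of one possible mitigation (an assumption on my part, not something the HttpClient API provides): have the HTTP module attach a no-op catch on a branch of the Promise before returning it. The rejection then counts as “handled” at process level, yet the returned Promise still rejects for any consumer that attaches a handler later. httpCall() is a hypothetical stand-in for httpClient.get(url).toPromise().

```typescript
// Hypothetical stand-in for httpClient.get(url).toPromise()
function httpCall(): Promise<string> {
  return Promise.reject(new Error("API error"));
}

function callApi(): Promise<string> {
  const p = httpCall();
  // No-op handler on a *branch*: prevents an unhandledRejection crash
  // if the consumer never attaches anything...
  p.catch(() => { /* swallowed only on this branch */ });
  // ...while the original promise, returned as-is, still carries the
  // rejection to consumers that do attach a .catch later.
  return p;
}
```

A consumer attaching handlers later still receives the rejection; a consumer attaching none no longer crashes the process.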

Error-handling with Observables in Node.js

A quick post to note two important things when it comes to Node.js and Observables.

Unhandled exceptions kill the server

One of the paramount aspects of webservices on Node.js to remember is: any unhandled exception will blow up the webserver. It won’t just return a ‘500’ to the user. It will, but it will also blow up the webserver.

This leads to 2 important things to consider:

  • a webservice on Node.js must be monitored by a Process Manager, and automatically relaunched when failing. This, of course, is OK for your service, since you designed it to fail-fast, in accordance with the 12-factor principles.
  • you should handle as many errors as possible and take appropriate actions (logging, error-management notifying, providing replacement values, etc.), so that only unforeseen exceptions actually blow up the server. Which is actually part of the Node.js philosophy: let unhandled exceptions kill the server, because if such exceptions occurred, the server might now be in an unstable state; so better kill and respawn it.

Observables’ subscribe: the nuclear time-bomb in your code

It’s easy to deal with a received Observable caring only about the positive outcomes, “leaving for later” the handling of possible but “unlikely” errors the Observable might emit.

obs.subscribe( result => { ... } );

If you wrote the above code, you’re a punk, or you’re just unaware of the nuclear time-bomb you just set up. You heard somewhere, especially in Node.js event-based programming: “always handle errors”, but you didn’t handle possible errors in the subscribe. Mayyybe you got the impression you were safe thanks to some larger-scope error-handling in your code (of course, you’re not a fool, you didn’t surround the above subscribe with try… catch…, knowing that the subscription itself cannot fail), in such a manner:

import { Observable, throwError } from "rxjs";

/**
 * Returns a Promise which returns a result,
 * only if this one matches some conditions.
 * NOTE: tons of ways to do that better, this is
 * only for the example
 */
async function showMustGoOn(): Promise<any> {
  return new Promise((resolve, reject) => {
    const obs: Observable<any> = throwError("You'll never get me if not in the subscribe error handler!");
    obs.subscribe(
      (result) => {
        if (result.type === 5) { // some conditions...
          resolve(result);
        }
      }
    );
  });
}

/**
 * A function that could give the illusion that any error
 * here or deeper will be caught and handled.
 */
async function supposedlyErrorShieldingFunction() {
  // Bring it on, synchronous errors!
  // It really does not make sense to do this here, because
  // showMustGoOn is asynchronous (and we don't use 'await')
  // and .then and .catch can't fail (their asynchronous callbacks can)
  try {
    const result: Promise<any> = showMustGoOn();
    result.then(
      (value) => { /*...*/ },
      (err) => { /* I'll catch you here if that's what it takes! */ }
    ).catch(
      (err) => { /* Otherwise, I'm totally cornering you here! */ }
    );
  } catch (e) {
    /* There's basically no way you escape me now. As true as I'm a punk. */
  }
}

supposedlyErrorShieldingFunction();

In the above code, you’re trying to catch any error that bubbles up in the synchronous flow and within the returned Promise, but this does not address the rule “handle all exceptions, especially in Observables and Promises”, because the asynchronous error happening within the Observable is not handled. This error will bubble up unattended to the root of the process and make it crash.
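Defusing the bomb is simply a matter of passing an error callback to subscribe and routing the failure into the Promise’s reject. Here is a sketch of the fixed showMustGoOn; to keep the snippet dependency-free and runnable standalone, a hand-rolled failing “observable” stands in for RxJS’s throwError.

```typescript
// Stand-in for throwError(...): emits an error immediately on subscribe.
const failingObservable = {
  subscribe(next: (result: any) => void, error?: (err: any) => void): void {
    const err = "You'll never get me if not in the subscribe error handler!";
    if (error) {
      error(err); // handled: the caller decides what to do
    } else {
      throw err; // unhandled: bubbles up and kills the process
    }
  },
};

async function showMustGoOn(): Promise<any> {
  return new Promise((resolve, reject) => {
    failingObservable.subscribe(
      (result) => {
        if (result.type === 5) { // some conditions...
          resolve(result);
        }
      },
      (err) => reject(err) // the error now flows into the Promise chain
    );
  });
}

// The consumer can now actually catch the error:
showMustGoOn().catch((err) => console.log("caught:", err));
```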

It’s not a recommended way, but for the purpose of experiment, you could add this block of code anywhere:

// This catches all uncaught exception in the application
process.on('uncaughtException', error => {
  console.log('uncaughtException', error);
});

/* // This would catch all unattended Promise rejection
process.on('unhandledRejection', (error: any) => {
  console.log('unhandledRejection', error.message);
});
 */

which would print:

uncaughtException You'll never get me if not in the subscribe error handler!

Those global catchers are not recommended, especially in production, because of the above-mentioned philosophy: let unhandled exceptions kill the server, because if such exceptions occurred, the server might now be in an unstable state; so better kill and respawn it.

Error-handling with Promises (in NestJS Controllers)

This is more of a memo: a concrete example of how Promises work when it comes to error-handling. It also explains some of NestJS Controllers’ behaviour when errors occur.

Below code and comments are self-explanatory.

@Controller('myRoute')
export class AlarmController {
  @Get()
  public async aNestControllerEndPoint() {
    // This snippet could be in any block of code,
    // but some of the comments below tell about
    // the Nest HTTP response in different cases.
    try {
      return await myAsyncFunc()
        .then(
          onfulfilled => {
            // We enter here if myAsyncFunc returns a resolved Promise.
            // Any exception happening in this block will be caught in the
            // .catch block below.
            console.log('success!', onfulfilled);
            return onfulfilled;
          },
          onrejected => {
            console.error("3 - Request fails with error", onrejected);
            // We enter here if:
            // - myAsyncFunc returns a rejected Promise
            // - an unhandled exception occurs in myAsyncFunc
            // We have to return a Promise: here, we return an auto-rejected Promise
            // which will be caught in the .catch block below.
            // Also, in this particular path, we decide to generate an exception
            // of a particular kind.
            return Promise.reject( new ConflictException() );
            // or simply:
            // throw new ConflictException();
          }
        )
        .catch(
          (reason: any) => {
            // We enter here:
            // - if an exception occurred in the 'onfulfilled' or 'onrejected' block above
            // - if the 'onfulfilled' or 'onrejected' block above returned a rejected Promise
            // - if an exception occurred in myAsyncFunc and no 'onrejected' block
            //   like above exists
            // - if myAsyncFunc returns a rejected Promise and no 'onrejected' block
            //   like above exists
            console.error("2 - Request fails with error", reason);
            throw reason;
            // or:
            // return Promise.reject( reason );
          }
        );
    } catch (e) {
      // Because we used 'await', we'll enter this block if:
      // - an exception occurs in myAsyncFunc and is not handled
      //   above by 'onrejected' or .catch
      // - myAsyncFunc returns a rejected Promise which is not handled
      //   above by 'onrejected' or .catch
      // - an exception occurs in the above blocks ('onfulfilled', 'onrejected' or .catch)
      //   and is not handled by subsequent blocks
      // - one of the above blocks ('onfulfilled', 'onrejected' or .catch) returns a
      //   rejected Promise not handled by subsequent blocks
      // If we hadn't used 'await', the server would instantly have replied '200',
      // and meanwhile the async process would still be on-going.
      console.error("1 - Request fails with error: ", e);
      // Letting an unhandled exception go will cause one of two things:
      // - the NestJS interceptor will catch it, and if it's a recognised HttpException,
      //   it will generate an appropriate HTTP answer (with the appropriate HTTP code)
      //   example: throw new HttpException('error message', 400); // import { HttpException } from '@nestjs/common';
      //   example: throw new ServiceUnavailableException(); // import { ServiceUnavailableException } from '@nestjs/common';
      // - or an error 500 will be generated, and due to the nature of Node.js
      //   regarding unhandled exceptions, the server will crash (and has to be
      //   brought back up by a Process Manager such as PM2, docker-compose, etc.)
      //   example: throw new Error("Unrecognised exception");
      throw e;
      // WARNING: if we don't re-throw an exception here and return nothing,
      // NestJS will generate a 20x response with '' as body.
    }
  }
}

Own NPM repository for shared code

Splitting our services into microservices had led us to wonder: what do we do with the shared code? Especially code for basic functions, such as logging or error handling.

Because the code of each microservice lives in its own project folder, and we obviously can’t replicate our shared code in each project folder, we have to place it in a separate shared/common/utils folder outside of any project folder. But how to access an external directory from our projects then? Solutions with npm link proved to be hard to set up, and caused problems with namespace aliases. Plus, if your common code evolves with even a minor breaking change, and all your projects are linked to it that way, you have to make the required adaptations in all your projects before you can go on working on them. Not so flexible.

In the end, we decided to go the long, but righteous way: to store such common code in a special project of its own, which we would build as versioned npm package, store on a private repository, and access from any of our projects. It was actually pretty easy to do. Let’s check the few steps to get there:

Host a private NPM repository

It seems like a big deal seen from afar, but it can actually be a breeze, provided you have a server at hand which can run Docker.

We used Verdaccio, which requires no setup at all. Check out https://hub.docker.com/r/verdaccio/verdaccio/ for details. Publishing restrictions on the generated repository are not covered here. Simply make sure you mount a volume to store published packages, as in docker-compose.yml:

volumes:
- /path/to/storage:/verdaccio/storage
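Put together, a minimal docker-compose.yml might look like this (a sketch on my part; the image, port and storage path are assumptions to adapt to your host):

```yaml
# Hypothetical minimal docker-compose.yml for a Verdaccio registry
version: "3"
services:
  verdaccio:
    image: verdaccio/verdaccio
    ports:
      - "4873:4873"   # Verdaccio's default port
    volumes:
      - /path/to/storage:/verdaccio/storage   # persist published packages
```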

Create the shared code library project

We use a NestJS project, compiled with TypeScript. Your shared code should reside in /src as usual. Now for the configuration of building and publishing the library:

Notable options within tsconfig.json:

{
  "compilerOptions": {
    "module": "commonjs",
    "declaration": true,
    ...
    "outDir": "./dist",
    "rootDir": "./src",
    "baseUrl": "./",
    "noLib": false
  },
  "include": ["src/**/*.ts"],
  "exclude": ["node_modules"]
}

tsconfig.build.json

{
  "extends": "./tsconfig.json",
  "exclude": ["node_modules", "test", "dist", "**/*spec.ts"]
}

package.json

{
  "name": "@ceneau/backoffice",
  "version": "1.0.0",
  ...
  "main": "dist/index.js",
  "files": [
    "dist/**/*",
    "*.md"
  ],
  "scripts": {
    "build": "tsc -p tsconfig.build.json",
    ...
  },
  "publishConfig": {
    "access": "restricted"
  },
  ...
}

index.ts should be a barrel file at the root of src/ whose goal is to re-export every object you want to expose in the package. index.ts will be transpiled into dist/index.js, the root of your exported objects.
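For instance, such a barrel could look like this (the file names are hypothetical placeholders for your own modules):

```typescript
// src/index.ts - re-export the package's public API from one place
export * from './logging/logger.service';
export * from './errors/app-error';
export * from './utils/date.utils';
```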

With that, the library is ready for building and publishing, but we still have to tell it where to store the generated package, as well as few optional behaviors to follow on each build.

.npmrc (at project’s root) – even if you use yarn

@ceneau:registry=http://url.to.private.npm.repository/

.yarnrc (at project’s root) – obviously if you use yarn to build

version-commit-hooks false
version-git-tag false
version-git-message "Ceneau backoffice common - v%s"

These options control the generation of tags for each build, and the associated git message. Check the documentation for more details.

Now, to publish a new version of the shared code:

npm adduser --registry http://url.to.private.npm.repository/

This, if you created your npm repository with no publishing restrictions, allows you to create a user within it. You’ll be asked for details and a password.

yarn publish

Builds and publishes the package. Hooray !

Use your common library in projects

Now that we’re here, the only remaining trick is to tell your project where to search for your common packages, in addition to the other default npm repositories.

.npmrc (at project’s root) – even if you use yarn

@ceneau:registry=http://url.to.private.npm.repository/

package.json

{
  "dependencies": {
    "@ceneau/backoffice": "^1",
    ...
  },
  ...
}

You should be good to go !
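From there, consumer code imports from the scoped package like any other dependency (Logger here is a hypothetical placeholder for whatever your barrel actually exports):

```typescript
import { Logger } from '@ceneau/backoffice';

const logger = new Logger();
logger.info('shared code in action');
```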

Bonus: recover from wrong npm config

If you accidentally changed your global npm registry, your next npm/yarn commands are likely to fail (in my case, with a 500 error coming from who knows where). My npm config was showing some concerning bits:

> npm config list
; userconfig C:\Users\L.npmrc
registry = "http://npm-repo.x.ceneau.com/"

Quick! Hurry and set the default registry back up!

> npm set registry https://registry.npmjs.org/

Running NestJs logic code fast

My 2012 computer, fully loaded with an i7 processor and RAM *at the time*, is still quite powerful and very reliable, so I don’t want to change it (ecologically speaking: I don’t want to use more rare resources from the Earth, which also come with misery and violence in producing countries; get informed!).
Yet, working with NestJS at first appeared very tedious and extremely slow to run, up to 2 minutes.

Avoiding launching the web-server every time

The first thing I wanted to avoid was launching the application webserver every time I changed something within the application logic (which is, 99.999% of the time, right?). So I created a script for the case where I wanted to develop-and-try:

console.log("[" + (new Date()).toISOString() + "] Script launched.");
import { NestFactory } from '@nestjs/core';
import * as moment from 'moment';
import 'moment/locale/pt-br';
import { AlarmController } from '@ceneau-wa/controller/alarm/alarm.controller';
import { AlarmModule } from '@ceneau-wa/controller/alarm/alarm.module';
import { SymptomOccurrenceDto } from '@ceneau-dto/activity/symptom-occurrence.dto';
console.log("[" + (new Date()).toISOString() + "] Imports done.");

async function run() {
  moment.locale('utc');
  // Prepare controllers and services
  const app = await NestFactory.create(AlarmModule);
  const controller = app.get(AlarmController) as AlarmController;

  // Prepare data to process
  const dto = new SymptomOccurrenceDto();
  dto.entityType = 10;
  dto.entityId = 81;
  dto.positive = true;
  dto.startingDate = 1554214235000;
  dto.endingDate = 1554214237000;
  dto.elementCount = 1;

  // Launch actual logic
  console.log("[" + (new Date()).toISOString() + "] Application starts.");
  await controller.launch(dto);
  console.log("[" + (new Date()).toISOString() + "] Application ends.");
}
run();

This works fine. I create the application context via NestFactory.create, giving it the module I need, and retrieve the component I want to use (here, the AlarmController). Works well!

Minimizing execution time

By using the following command to launch the above script:

node --nolazy -r ts-node/register -r tsconfig-paths/register src/sendOccurrence.ts 
(don't use this command ! see below)

execution times were terrible, between 90 and 120 seconds.

Surely, I thought, it’s the dealing with NestFactory and its internal processes which causes a horrible overhead, so I tried not using NestFactory and instead created all services myself along with their dependent services, but I was only gaining 2 seconds. I added some profiling time prints, as shown in the code above, and could see most of the time was spent during the imports!

Finally, it’s the use of ts-node (install it globally) which saved me, being way faster:

ts-node -T -r tsconfig-paths/register src/sendOccurrence.ts
(-T: "does not check for type errors", saves another 2 seconds in execution)

Execution dropped to 17-20 seconds. Hooray !

Take note that some people totally struggle while using ts-node too, like this guy and his 9-minute start time…

Debugging the process

Unfortunately, I couldn’t find a way to minimize the launching time while debugging the script above. I’m still using a VSCode launch.json with such a configuration:

      {
          "type": "node",
          "request": "launch",
          "name": "Slooooow - Debug SendOccurrence",
          "args": ["${workspaceFolder}/src/sendOccurrence.ts"],
          "runtimeArgs": ["--nolazy", "-r", "ts-node/register", "-r", "tsconfig-paths/register"],
          "sourceMaps": true,
          "cwd": "${workspaceRoot}",
          "protocol": "inspector",
          "stopOnEntry": false
      }

And launching time is still 2 minutes…

Note: from VS Code versions after 1.46.0, the internal JavaScript debugger has an auto-attach mode which seems sweet, and maybe solves this. Turn the option “Debug: Toggle Auto Attach” on, and run an npm/yarn command: your breakpoints should be hit. More details here:
https://github.com/microsoft/vscode-js-debug#debug-nodejs-processes-in-the-terminal.

Debugging Typescript with Visual Studio Code and module aliasing

Note: from VS Code versions after 1.46.0, the internal JavaScript debugger has an auto-attach mode which seems sweet, and maybe makes all the text below deprecated. Turn the option “Debug: Toggle Auto Attach” on, and run an npm/yarn command: your breakpoints should be hit. More details here:
https://github.com/microsoft/vscode-js-debug#debug-nodejs-processes-in-the-terminal.

Our policy is to move more and more towards JavaScript, for client-side applications as well as backoffice jobs, and with that, to embrace technologies such as Node.js and language add-ons such as TypeScript. Frameworks like Angular (web apps) and NestJS (server-side apps) provide a full range of ready-to-use schematics and release-build optimizations. Now we want an IDE which leverages those tools with powerful IntelliSense and debugging possibilities. IntelliJ IDEA comes as a reference choice but is not for every budget, so we’ll give Visual Studio Code a go. This post is about my findings on how to debug a NestJS app (a Node.js app following Angular policies) with VSCode.

Debugging NestJS apps

To avoid ridiculous import paths like ‘../../../../adir/amodule’, module aliasing is used in the form ‘@myalias/amodule’, improving readability and making it easier to move code files around. The aliases are set in the tsconfig.json file at the root of the project, needing a ‘baseUrl’ attribute and a set of ‘paths’ (both under ‘compilerOptions’). VSCode IntelliSense knows this trick perfectly and reacts accordingly, but when it comes to debugging, the default launch.json created by VSCode struggles to match the paths, and quickly results in:

Debugger listening on ws://127.0.0.1:31189/821980d6-4e30-4360-8ac5-6b47af4faced
For help, see: https://nodejs.org/en/docs/inspector
Debugger attached.
Error: Cannot find module '@myalias/app.service'
at Function.Module._resolveFilename (internal/modules/cjs/loader.js:582:15)
at Function.Module._load (internal/modules/cjs/loader.js:508:25)
at Module.require (internal/modules/cjs/loader.js:637:17)
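For reference, the alias configuration in tsconfig.json looks something like this (a minimal sketch; ‘@myalias’ and the src/ mapping are example values to adapt):

```json
{
  "compilerOptions": {
    "baseUrl": "./",
    "paths": {
      "@myalias/*": ["src/*"]
    }
  }
}
```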

The trick here is to indicate explicitly that the tsconfig-paths module must be used, just like in the default package.json 'start' script:
"start": "ts-node -r tsconfig-paths/register src/main.ts"
In the VSCode launch.json file, it's done with:
"runtimeArgs": ["--nolazy", "-r", "ts-node/register", "-r", "tsconfig-paths/register"]

To wrap it up, the VSCode launch.json file should look like:

{
  // Use IntelliSense to learn about possible attributes.
  // Hover to view descriptions of existing attributes.
  // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Debug Nest Framework",
      "args": ["${workspaceFolder}/src/main.ts"],
      "runtimeArgs": ["--nolazy", "-r", "ts-node/register", "-r", "tsconfig-paths/register"],
      "sourceMaps": true,
      "cwd": "${workspaceRoot}",
      "protocol": "inspector"
    }
  ]
}

Debugging NestJS tests

It’s very, very useful to be able to debug the tests written for the application – the same tests which will be automatically run in Continuous Integration. NestJS uses the Jest framework for tests (close to the Jasmine framework), and here are the launch.json configurations to add, to debug either all tests or just the test file currently shown in the editor:

{
    ...
    "configurations": [
        ...,
        {
          "type": "node",
          "request": "launch",
          "name": "Debug test - all",
          "program": "${workspaceFolder}/node_modules/.bin/jest",
          "args": ["--runInBand"],
          "console": "integratedTerminal",
          "internalConsoleOptions": "neverOpen",
          "disableOptimisticBPs": true,
          "windows": {
            "program": "${workspaceFolder}/node_modules/jest/bin/jest"
          }
        },
        {
          "type": "node",
          "request": "launch",
          "name": "Debug test - current file",
          "program": "${workspaceFolder}/node_modules/.bin/jest",
          "args": [
            "${fileBasenameNoExtension}",
            "--config",
            "jest.config.js"
          ],
          "console": "integratedTerminal",
          "internalConsoleOptions": "neverOpen",
          "disableOptimisticBPs": true,
          "windows": {
            "program": "${workspaceFolder}/node_modules/jest/bin/jest"
          }
        }
    ]
}