Most natural languages exhibit characteristic symbol statistics. This has grave consequences for encryption, since these statistics can be exploited to break the cipher. The invention describes a systematic coding which maps symbols from the text alphabet A to symbols from the image alphabet B in such a manner that the B-valued images become, up to a predeterminable degree of accuracy, a sequence of statistically independent, uniformly distributed random variables. The coding begins with an initialisation phase, during which a sequence of coding tables is constructed from the relative frequencies of symbol tuples. These tables are then used in the subsequent coding phase to encode the individual symbols from A by means of random decisions. Decoding nevertheless always remains unambiguous. Compared with Huffman data compression, this coding achieves the same accuracy while requiring much less (only logarithmic) storage space.
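The mechanism described — encoding each plaintext symbol by a random choice among several code symbols, where the number of code symbols allotted to a plaintext symbol is proportional to its relative frequency — is the classical homophonic substitution idea. The following is a minimal sketch in Python under simplifying assumptions: it uses single-symbol frequencies rather than the symbol tuples of the invention, a single coding table rather than a sequence of tables, and all function names are illustrative.

```python
import random

def build_tables(freqs, n_code_symbols):
    """Assign each source symbol a disjoint set of homophones (code symbols),
    with set sizes roughly proportional to the symbol's relative frequency,
    so that randomly chosen homophones are approximately uniform."""
    counts = {s: max(1, round(f * n_code_symbols)) for s, f in freqs.items()}
    # Repair rounding so the counts sum exactly to n_code_symbols.
    while sum(counts.values()) > n_code_symbols:
        counts[max(counts, key=counts.get)] -= 1
    while sum(counts.values()) < n_code_symbols:
        counts[max(counts, key=counts.get)] += 1
    encode_table, decode_table = {}, {}
    next_code = 0
    for s, c in counts.items():
        homophones = list(range(next_code, next_code + c))
        next_code += c
        encode_table[s] = homophones
        for h in homophones:
            # Homophone sets are disjoint, so decoding is unambiguous.
            decode_table[h] = s
    return encode_table, decode_table

def encode(text, encode_table, rng=random):
    """Encode each symbol by a random decision among its homophones."""
    return [rng.choice(encode_table[s]) for s in text]

def decode(codes, decode_table):
    """Invert the (many-to-one) homophone assignment."""
    return ''.join(decode_table[c] for c in codes)
```

With frequencies 1/2, 1/4, 1/4 and eight code symbols, the symbols receive four, two, and two homophones respectively, so each of the eight code symbols appears with probability 1/8; the decoder recovers the plaintext exactly despite the random choices made during encoding.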