Tokenize

Tokens = Tokenize ( String [ , Identifiers , Strings , Operators , KeepSpace ] )

Since 3.21

Split a string into tokens and return them.

Arguments

  • String : the string to split.

  • Identifiers : a string of extra characters allowed in identifier tokens.

  • Strings : an array of strings, each string describing the delimiters of a string token.

  • Operators : an array of strings, each string representing an operator token.

  • KeepSpace : whether space tokens are returned.

Return value

The tokens are returned as a string array.
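
Since the result is an ordinary string array, it can be iterated directly. A minimal sketch:

Dim sToken As String

' Prints 1, + and 22 on separate lines; the spaces are dropped
' because KeepSpace is not set.
For Each sToken In Tokenize("1 + 22")
  Print sToken
Next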

Description

This function is a simple lexical parser that splits a string into tokens and returns them as a string array made of the following kinds of tokens:

  • Space tokens

    A space token is made of successive space or tab characters.

  • Newline tokens

    A newline token is made of one newline character.

  • Number tokens

    A number token is made of successive digit characters.

  • Identifier tokens

    An identifier starts with a letter and is made of successive letters, digits, or any extra character specified in the Identifiers argument.

    If Identifiers is not specified, only letters and digits are allowed.

  • String tokens

    Each string of the Strings array describes the delimiters of a string token.

    • If the description is made of one character, then that character is both the initial and final delimiter. When two successive delimiter characters are encountered inside the string, only one of them is kept, and it is treated as a literal character rather than as a closing delimiter.

    • If the description is made of two characters, then the first one is the initial delimiter, and the second one the final delimiter. The final delimiter cannot be escaped.

    • If the description is made of three characters, then the first one is the initial delimiter, and the second one the final delimiter. The final delimiter can be escaped by using the third character.

    If Strings is not specified, then no string token is parsed.

    For example, ["\"", "''\\", "[]"] will parse as string tokens everything enclosed in double quotes, single quotes, or square brackets. Strings enclosed in double quotes allow the delimiter to be escaped by doubling it, strings enclosed in single quotes allow the ' character to be escaped with a backslash, whereas strings enclosed in square brackets allow no escape at all.

  • Operator tokens

    The Operators argument is an array of strings, each of which is parsed as a single token.

    As all characters that are not parsed as a space, newline, number, identifier or string token are returned as single-character tokens, Operators should usually contain only operators made of several characters, for example <=, >=, &&, and so on (see the sketch after this list).
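
To make the Identifiers, Strings and Operators arguments more concrete, here is an illustrative sketch; it is not taken from the official examples, and the expected result simply follows the rules above:

' Illustrative sketch only: "_" is allowed inside identifiers, single-quoted
' strings use a backslash escape, bracketed strings allow no escape, and
' "<=" and ">=" are parsed as single operator tokens.
Print Tokenize("my_var >= 'a b' + [c d]", "_", ["''\\", "[]"], ["<=", ">="]).Join(" _ ")
' According to the description, my_var comes back as one identifier token,
' 'a b' and [c d] as single string tokens, and >= as a single operator token.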

The tokens are parsed in the order of that description.

So if a token is parsed as an identifier, it cannot be parsed as an operator. In other words, if you specify something like "X->" in the Operators argument, it will never match, as "X" will be identified as an identifier first.
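
A minimal illustration of that precedence (an assumed sketch, not one of the official examples):

' X is consumed as an identifier before operator matching is attempted,
' so the operator "X->" can never be produced.
Print Tokenize("X->Y",,, ["X->"]).Join(" _ ")
' Following the rules above, X and Y are identifier tokens, while - and >
' come back as single-character tokens.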

As all tokens are returned as plain strings, you cannot tell afterwards which kind of token each one is, but in practice this should not matter.

Examples

Print Tokenize("Return Subst((\"&1 MiB\"), FormatNumber(Size / 1048576))").Join(" _ ")
Return _ Subst _ ( _ ( _ " _ & _ 1 _ MiB _ " _ ) _ , _ FormatNumber _ ( _ Size _ / _ 1048576 _ ) _ )

Print Tokenize("Return Subst((\"&1 MiB\"), FormatNumber(Size / 1048576))",, ["\""]).Join(" _ ")
Return _ Subst _ ( _ ( _ "&1 MiB" _ ) _ , _ FormatNumber _ ( _ Size _ / _ 1048576 _ ) _ )
