Tokenize

Tokens = Tokenize ( String [ , Identifiers , Strings , Operators , KeepSpace ] )

Since 3.21

Split a string into tokens and return them.

Arguments

  • String : the string to split.

  • Identifiers : a string of extra characters allowed in identifier tokens.

  • Strings : an array of strings, each string describing the delimiters of a string token.

  • Operators : an array of strings, each string representing an operator token.

  • KeepSpace : whether space tokens are also returned.

Return value

The tokens are returned as a string array.

Description

This function is a simple lexical parser that splits a string into tokens and returns them as a string array made of the following kinds of tokens:

  • Space tokens

    A space token is made of successive space or tab characters.

  • Newline tokens

    A newline token is made of one newline character.

  • Number tokens

    A number token is made of successive digit characters.

  • Identifier tokens

    An identifier token starts with a letter and is made of successive letters, digits, or any extra characters specified in the Identifiers argument.

    If Identifiers is not specified, only letters and digits are allowed.

  • String tokens

    Each string of the Strings array describes the delimiters of a string token.

    • If the description is made of one character, then that character is both the initial and final delimiter. When two successive delimiter characters are encountered inside the string, only one of them is kept and the string token does not end there; no separate escape character is recognized.

    • If the description is made of two characters, then the first one is the initial delimiter, and the second one the final delimiter. The final delimiter cannot be escaped.

    • If the description is made of three characters, then the first one is the initial delimiter, and the second one the final delimiter. The final delimiter can be escaped by using the third character.

    If Strings is not specified, then no string token is parsed.

    For example, ["\"", "''\\", "[]"] will parse as string tokens everything enclosed in double quotes, single quotes, or square brackets. Strings enclosed in double quotes allow the double quote to be doubled inside the string, strings enclosed in single quotes allow the ' character to be escaped with a backslash, and strings enclosed in square brackets allow no escaping at all.

  • Operator tokens

    The Operators argument is an array of strings, each of which is parsed as a single token.

    As all characters that are not parsed as part of a space, newline, number, identifier or string token are returned as a single-character token, Operators should usually contain only operators made of multiple characters, such as <=, >=, &&, and so on.

Tokens are matched in the order of the description above.

Consequently, once a token has been parsed as an identifier, it cannot be parsed as an operator. In other words, if you specify something like "X->" in the Operators argument, it will never match, because "X" is consumed as an identifier first.

As all tokens are returned as strings, you cannot directly tell what type a given token is, but in practice this rarely matters.
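The rules above can be sketched in Python (not Gambas; the function name, signature, and internal details such as how escaped characters are kept are illustrative assumptions, not the actual Gambas implementation):

```python
def tokenize(s, identifiers="", strings=None, operators=None, keep_space=False):
    """Rough approximation of the Tokenize rules, for illustration only."""
    strings = strings or []
    # Try longer operators first so "<=" wins over "<".
    operators = sorted(operators or [], key=len, reverse=True)
    tokens, i, n = [], 0, len(s)
    while i < n:
        c = s[i]
        if c in " \t":                                  # space token
            j = i
            while j < n and s[j] in " \t":
                j += 1
            if keep_space:
                tokens.append(s[i:j])
            i = j
        elif c == "\n":                                 # newline token
            tokens.append(c)
            i += 1
        elif c.isdigit():                               # number token
            j = i
            while j < n and s[j].isdigit():
                j += 1
            tokens.append(s[i:j])
            i = j
        elif c.isalpha():                               # identifier token
            j = i
            while j < n and (s[j].isalnum() or s[j] in identifiers):
                j += 1
            tokens.append(s[i:j])
            i = j
        else:
            desc = next((d for d in strings if d[0] == c), None)
            if desc:                                    # string token
                close = desc[1] if len(desc) > 1 else desc[0]
                escape = desc[2] if len(desc) > 2 else None
                out, j = [desc[0]], i + 1
                while j < n:
                    if escape and s[j] == escape and j + 1 < n:
                        out.append(s[j + 1])            # escaped character
                        j += 2
                    elif s[j] == close:
                        # One-character description: a doubled delimiter is
                        # kept as one character and does not end the string.
                        if len(desc) == 1 and j + 1 < n and s[j + 1] == close:
                            out.append(close)
                            j += 2
                        else:
                            out.append(close)
                            j += 1
                            break
                    else:
                        out.append(s[j])
                        j += 1
                tokens.append("".join(out))
                i = j
            else:
                op = next((o for o in operators if s.startswith(o, i)), None)
                if op:                                  # multi-character operator
                    tokens.append(op)
                    i += len(op)
                else:                                   # single-character token
                    tokens.append(c)
                    i += 1
    return tokens
```

Run against the examples below, this sketch reproduces the outputs shown; it also illustrates why an Operators entry starting with a letter, such as "X->", can never match.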

Examples

Print Tokenize("Return Subst((\"&1 MiB\"), FormatNumber(Size / 1048576))").Join(" _ ")
Return _ Subst _ ( _ ( _ " _ & _ 1 _ MiB _ " _ ) _ , _ FormatNumber _ ( _ Size _ / _ 1048576 _ ) _ )

Print Tokenize("Return Subst((\"&1 MiB\"), FormatNumber(Size / 1048576))",, ["\""]).Join(" _ ")
Return _ Subst _ ( _ ( _ "&1 MiB" _ ) _ , _ FormatNumber _ ( _ Size _ / _ 1048576 _ ) _ )

See also