Everything posted by TheDcoder
-
My guess is that RetroArch listens to the hardware directly at a lower level, so the virtual key events sent by AutoIt are useless. If this is true, your best bet is to use a virtual keyboard driver which emulates a physical keyboard; no idea if such a thing exists in the wild yet. Also, I don't think RetroArch exposes an interface which UIAutomation could use.
-
Techniques for multi-lingual GUI design
TheDcoder replied to TheDcoder's topic in AutoIt GUI Help and Support
Indeed, no way to get around that. But really, if we are concerned about security there isn't much that can be done, as the binary itself can be modified to change the strings. Another advantage of external sources is that they make it possible to download only the languages which are needed, as opposed to all supported languages, which would essentially be bloat.
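As an illustration of the external-source idea, here is a minimal sketch which reads strings from a hypothetical lang_en.ini file with a [strings] section (e.g. TITLE=Hello World); the file layout and function names are made up for this example:

Global $g_aStrings ; key/value pairs of the active language

Func _LoadLanguage($sLangCode)
    ; Read all key=value pairs from the [strings] section of the language file
    Local $aSection = IniReadSection(@ScriptDir & "\lang_" & $sLangCode & ".ini", "strings")
    If @error Then Return SetError(1, 0, False)
    $g_aStrings = $aSection
    Return True
EndFunc

Func _t($sKey)
    ; Look up a translated string by key, falling back to the key itself
    For $i = 1 To $g_aStrings[0][0]
        If $g_aStrings[$i][0] = $sKey Then Return $g_aStrings[$i][1]
    Next
    Return $sKey
EndFunc

_LoadLanguage("en")
MsgBox(0, _t("TITLE"), _t("GREETING"))
-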
Techniques for multi-lingual GUI design
TheDcoder replied to TheDcoder's topic in AutoIt GUI Help and Support
@jchd Thanks for your alternate method of using a 2D array instead of a 1D approach like mine. Some people might prefer the 1D approach as it makes the expression shorter ($l[$L_FOO] instead of $l[$L_EN][$L_FOO]). @water You have raised some very good points, especially the one about GUI layout. @argumentum Absolutely doable; that is what I originally wanted to do, but I decided against it to keep things simpler (in the short term). There is an excellent UDF called AU3Text by @dany which does exactly this, and much more. It also contains a version of the Ini functions which I think is capable of properly handling Unicode input. Sadly the demo gave me a bunch of syntax errors when I attempted to run it with the latest AutoIt.
-
Techniques for multi-lingual GUI design
TheDcoder replied to TheDcoder's topic in AutoIt GUI Help and Support
Maybe, I did not give it much thought. I am working on my own sloppy system right now, which looks like this:

Global Enum $L_DUMMY, _
		$L_TITLE, _
		$L_FOO, _
		$L_BAR, _
		$L_DUMMY2

Global Const $L_EN[5] = ["", _
		"Hello World", _
		"Foo", _
		"Bar", _
		"" _
]

Global Const $L_HI[5] = ["", _
		"नमस्कार दुनिआ", _
		"गु", _
		"गाली भरो मादर***", _
		"" _
]

Global $l = $L_HI
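For comparison, here is a rough sketch of the 2D-array variant @jchd described, reusing the strings from my example above; the exact layout is my guess at the idea, not his code:

Global Enum $LANG_EN, $LANG_HI, $LANG_COUNT
Global Enum $L_TITLE, $L_FOO, $L_BAR, $L_COUNT

Global Const $g_asLang[$LANG_COUNT][$L_COUNT] = [ _
    ["Hello World", "Foo", "Bar"], _
    ["नमस्कार दुनिआ", "गु", "गाली भरो मादर***"] _
]

Global $g_iLang = $LANG_HI ; switch languages by changing a single index
MsgBox(0, "", $g_asLang[$g_iLang][$L_TITLE])
-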
Hello everyone! It has been a very long time since I made a post in this forum. I am working on a freelance project which requires me to create a multi-lingual GUI, and I can think of a few ways to do that. But I want to know what you guys have in store for this, so please share all of your awesome tips and hacks for creating multi-lingual GUIs. If this thread gets enough submissions, maybe we can turn it into a meta-thread listing all of the techniques. Thank you for the submissions in advance! Regards, TheDcoder.
-
This definitely sounds like a scam, and I have no idea about the user as I have never seen them. Was this user from 2008 too? I did, and it is pretty amazing that my post was able to attract old members and prompt their first posts here. The first bit is indeed a good coincidence, but the part about motivation can easily be explained (as I did in my previous post). @bakdlpzptk looks legitimate to me; they haven't made any spam posts and their query is fully legitimate... not to mention the fact that they have had their account since 2008, which is not something a spammer is likely to have. Any comments on this, @bakdlpzptk?
-
I agree: compile to .a3x and use something like a shortcut or a batch script to act as a launcher which will call AutoIt3.exe.
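For example, a minimal launcher batch file might look like this; the interpreter path and script name are hypothetical, so adjust them to your setup:

@echo off
rem Run the compiled .a3x script with the full AutoIt interpreter,
rem passing along any command-line arguments
"C:\Program Files (x86)\AutoIt3\AutoIt3.exe" "%~dp0MyScript.a3x" %*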
-
Nice, good luck! Do post any issues that you encounter during compilation, and I will try to help. I find that to be very strange: why would a spammer have an account from 2008? And why would they post in a legitimate thread such as mine before posting spam? I see that @bakdlpzptk is still here, with no spam posts in their account. bakdlpzptk might simply have been inspired by what avikdhupar said and decided to ask. I do wonder what kind of spam was posted... maybe their computers were infected with spamware.
-
The pre-built binary that I uploaded cannot be run on an RPi as it uses a different processor architecture (ARM). So you will have to compile the code yourself; you just need a C compiler and CMake, and then you are on your way. Here are some simple commands that should get you started:

$ git clone https://github.com/DcodingTheWeb/EasyCodeIt.git
$ cd EasyCodeIt
$ mkdir build && cd build
$ cmake ..
$ make

I don't see the post that you quoted, has it been removed? Mac is currently not a target because I don't have any Apple devices; I have never used them in my life. That being said, it should be easy enough to compile the current version for macOS, and in the future there might be a separate macOS fork maintained by 3rd-party developers with Mac experience. I also welcome hardware donations.
-
@argumentum You are right, JSON is just a potential format for outputting data, and I agree that the tokens are probably never going to see the light of day... so JSON is pretty low on the priority list. However, I will eventually add it as an output option for the final parsed source tree, but that is far in the future.
-
Thanks for testing and for the advice @argumentum. I wanted to quickly put out something that people can try, so this is not really a proper release; that is why you found a lack of instructions in the downloads. Next time I will definitely put some more effort into it.

Yeah, I can completely understand, I find this method of outputting data awkward too, but I couldn't come up with anything better without adding more dependencies... originally I wanted the program to output the data in the JSON format, but since that would require me to use a library, I decided against it. Maybe in the future I will include JSON output. Right now the program just prints out all of the tokens in my own DIY format.

Also, you might find the token type "Word" a bit confusing; this is because I use the same type of token for both keywords (like If, While, Do, For, Break, Switch etc.) and functions (MsgBox), as they technically share the same syntax. During further parsing, these words will be properly split into functions and keywords (a rough sketch of that split is at the end of this post).

You are pretty close; there are many names for it in *nix land, but the most technically accurate term is shell script, because it is a script which is run by a shell. A shell is nothing but a program which takes commands from the user in a terminal. Windows is a bit of a special case, because it has cmd.exe, which acts both as a virtual terminal emulator (the black window) and as the command shell, at least that is what I think is happening in Windows. In Linux we have a more obvious separation: there are different programs for the terminal and different shell programs that we can use according to our preferences. The overwhelming majority on Linux use the bash shell, so you are not really wrong. Thanks!
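The sketch I promised above, showing how a later pass might tell keywords apart from plain words; the keyword list and function are made up for illustration, this is not code from the repository:

#include <ctype.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

static const char *KEYWORDS[] = {"If", "While", "Do", "For", "Switch", "Func", "Return"};

// Compare a Word token's text against the keyword list, case-insensitively
// (AutoIt keywords are not case-sensitive); anything that doesn't match is
// a function name or some other identifier.
static bool word_is_keyword(const char *data, size_t len) {
	for (size_t i = 0; i < sizeof KEYWORDS / sizeof *KEYWORDS; ++i) {
		if (strlen(KEYWORDS[i]) != len) continue;
		size_t j = 0;
		while (j < len && tolower((unsigned char) data[j]) == tolower((unsigned char) KEYWORDS[i][j])) ++j;
		if (j == len) return true;
	}
	return false;
}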
-
Hi @jpm! It is a privilege to have you commenting here. Indeed, currently I am using a generic Token structure; it is just meant to make scanning the tokens easier for the next step in the parser. I will make specialized Token structures for each token type, which will contain the primitive data (there is a rough sketch of what I mean at the end of this post). I have not added support for floats yet, partly because I have rarely seen them being used in scripts. I will add support if someone asks for it though; right now I am concentrating on getting a simple and functioning proof-of-concept interpreter running, as opposed to a full-featured one. We can build on the PoC to add more features and eventually support most of AutoIt's. I have given some thought to this, but at this point it is just up in the air: I am using standard C functions (which are locale-sensitive) to identify the types of characters (along with some hardcoded characters), but in the future I will eventually use a robust UTF-8 text processor. Thank you!
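Here is the rough sketch of the specialized-structure idea, as a tagged union; this is just me thinking out loud, not the final design:

#include <stddef.h>

// The type field says which member of the union is valid
enum TokenType {TOK_NUMBER, TOK_WORD, TOK_OPERATOR};

struct Token {
	enum TokenType type;
	union {
		double number;        // TOK_NUMBER: the parsed numeric value
		struct {
			const char *data; // TOK_WORD: slice of the source text
			size_t len;
		} word;
		char op;              // TOK_OPERATOR: the operator character
	} value;
};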
-
I just finished adding support for all of the leftover token types, and now I have a fully functional tokenizer! The latest code is available on GitHub, and I have also uploaded the latest binary builds here so that you guys can test and give me feedback. There are two files, one each for Windows and Linux; you guys are smart enough to figure out which is which. Right now the interface is very simple: just download the binary and supply it an .au3 file as the first command-line argument, and it will print out all the tokens. Please report any unknown token errors which occur in valid scripts! Have fun, TheDcoder.
-
Tagged with: autoit, implementation (and 2 more)
-
@rcmaehl Not likely soon; I am busy with other freelance projects right now (as well as a few other personal things), and it will take a bit of time before I have a fully working interpreter... even then I would most likely just do a zip release without an installer. Thanks for showing your interest, I will definitely let you and everyone know when I have an alpha build.
-
Hello again, I apologize for the delay, life got in the way again... as it always does. Today I made some good progress: I redesigned my code to handle one token at a time instead of all tokens at once, which has made things simpler. I now have a tangible scanner framework, and I have added support for quite a few types of tokens!

enum TokenType {
	TOK_UNKNOWN,
	TOK_WHITESPACE,
	TOK_COMMENT,
	TOK_DIRECTIVE,
	TOK_NUMBER,
	TOK_WORD,
	TOK_OPERATOR,
	TOK_BRACKET,
	TOK_COMMA,
};

I still have to add support for macros, variables, strings etc. but that shouldn't be very hard. I also forgot to mention that handling multi-line comments turns out to be not as simple as I thought; it requires special handling and has unique behavior, as I believe it is the only multi-line (pre-processor) directive. Anyway, here is the complete code of the parser:

/*
 * This file is part of EasyCodeIt.
 *
 * Copyright (C) 2020 TheDcoder <[email protected]>
 *
 * EasyCodeIt is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <https://www.gnu.org/licenses/>.
 */

#include <ctype.h>
#include <stdbool.h>
#include <stdio.h>

#include "parse.h"
#include "utils.h"

const char CHR_COMMENT = ';';
const char CHR_DIRECTIVE = '#';
const char CHR_COMMA = ',';

char CHRSET_WHITESPACE[] = {' ', '\t', '\n'};
char CHRSET_OPERATOR[] = {
	'+', '-', '*', '/', '^', '&',
	'=', '<', '>', '?', ':',
};
char CHRSET_OPERATOR_EQUABLE[] = {'+', '-', '*', '/', '^', '&', '='};
char CHRSET_BRACKET[] = {'(', ')'};

struct TokenCharMapElem {
	enum TokenType type;
	union {
		const char chr;
		const char *chr_arr;
	};
};

static void print_token(struct Token *token) {
	puts("---### TOKEN ###---");
	char *token_type;
	switch (token->type) {
		case TOK_UNKNOWN:
			token_type = "Unknown";
			break;
		case TOK_WHITESPACE:
			token_type = "Whitespace";
			break;
		case TOK_COMMENT:
			token_type = "Comment";
			break;
		case TOK_DIRECTIVE:
			token_type = "Directive";
			break;
		case TOK_NUMBER:
			token_type = "Number";
			break;
		case TOK_WORD:
			token_type = "Word";
			break;
		case TOK_OPERATOR:
			token_type = "Operator";
			break;
		case TOK_BRACKET:
			token_type = "Bracket";
			break;
		case TOK_COMMA:
			token_type = "Comma";
			break;
		default:
			token_type = "Unnamed";
			break;
	}
	fputs("Type: ", stdout);
	puts(token_type);
	fputs("Data: ", stdout);
	for (size_t c = 0; c < token->data_len; c++) putchar(token->data[c]);
	putchar('\n');
}

void parse(char *code) {
	while (true) {
		struct Token token = token_get(code, &code);
		if (!code) break;
		if (token.type != TOK_WHITESPACE) print_token(&token);
		if (token.type == TOK_UNKNOWN) die("!!! Unknown token encountered !!!");
	}
	return;
}

struct Token token_get(char *code, char **next) {
	struct Token token = {
		.type = TOK_UNKNOWN,
		.data = NULL,
		.data_len = 0,
	};
	size_t length;

	// Identify the token
	if (length = scan_string(code, char_is_whitespace)) {
		// Whitespace
		token.type = TOK_WHITESPACE;
		token.data = code;
		token.data_len = length;
	} else if (*code == CHR_COMMENT || *code == CHR_DIRECTIVE) {
		// Comments and Directives
		token.type = *code == CHR_COMMENT ? TOK_COMMENT : TOK_DIRECTIVE;
		token.data = code;
		token.data_len = scan_string(code, char_is_not_eol);
	} else if (length = scan_string(code, char_is_num)) {
		// Numbers
		token.type = TOK_NUMBER;
		token.data = code;
		token.data_len = length;
	} else if (length = scan_string(code, char_is_alphanum)) {
		// Words
		token.type = TOK_WORD;
		token.data = code;
		token.data_len = length;
	} else if (char_is_opsym(*code)) {
		// Operator
		token.type = TOK_OPERATOR;
		token.data = code;
		// Include the trailing `=` if possible
		token.data_len = code[1] == '=' && chrcmp(*code, CHRSET_OPERATOR_EQUABLE, sizeof CHRSET_OPERATOR_EQUABLE) ? 2 : 1;
	} else if (char_is_bracket(*code)) {
		// Bracket (Parenthesis)
		token.type = TOK_BRACKET;
		token.data = code;
		token.data_len = 1;
	} else if (*code == CHR_COMMA) {
		// Comma
		token.type = TOK_COMMA;
		token.data = code;
		token.data_len = 1;
	} else {
		// Unknown
		token.data = code;
		token.data_len = 1;
	}

	// Set the next code
	*next = *code == '\0' ? NULL : code + token.data_len;

	// Return the token
	return token;
}

size_t scan_string(char *str, bool (cmpfunc)(char)) {
	size_t len = 0;
	while (true) {
		if (!cmpfunc(*str)) break;
		++len;
		++str;
	}
	return len;
}

bool char_is_whitespace(char chr) {
	return chrcmp(chr, CHRSET_WHITESPACE, sizeof CHRSET_WHITESPACE);
}

bool char_is_alpha(char chr) {
	return isalpha(chr);
}

bool char_is_num(char chr) {
	return isdigit(chr);
}

bool char_is_alphanum(char chr) {
	return char_is_alpha(chr) || char_is_num(chr);
}

bool char_is_opsym(char chr) {
	return chrcmp(chr, CHRSET_OPERATOR, sizeof CHRSET_OPERATOR);
}

bool char_is_bracket(char chr) {
	return chrcmp(chr, CHRSET_BRACKET, sizeof CHRSET_BRACKET);
}

bool char_is_not_eol(char chr) {
	return chr != '\n' && chr != '\0';
}

Example output from my test:

J:\Projects\EasyCodeIt\build_win>eci test.au3
---### TOKEN ###---
Type: Directive
Data: #include <Motivation.au3>
---### TOKEN ###---
Type: Comment
Data: ; This is a single line comment
---### TOKEN ###---
Type: Comment
Data: ; Mary had a little lamb
---### TOKEN ###---
Type: Comment
Data: ; Hello from EasyCodeIt!
---### TOKEN ###---
Type: Word
Data: MsgBox
---### TOKEN ###---
Type: Bracket
Data: (
---### TOKEN ###---
Type: Number
Data: 0
---### TOKEN ###---
Type: Comma
Data: ,
---### TOKEN ###---
Type: Number
Data: 0
---### TOKEN ###---
Type: Comma
Data: ,
---### TOKEN ###---
Type: Number
Data: 1
---### TOKEN ###---
Type: Operator
Data: +
---### TOKEN ###---
Type: Number
Data: 2
---### TOKEN ###---
Type: Operator
Data: -
---### TOKEN ###---
Type: Number
Data: 3
---### TOKEN ###---
Type: Operator
Data: *
---### TOKEN ###---
Type: Number
Data: 4
---### TOKEN ###---
Type: Operator
Data: /
---### TOKEN ###---
Type: Number
Data: 5
---### TOKEN ###---
Type: Operator
Data: ^
---### TOKEN ###---
Type: Number
Data: 6
---### TOKEN ###---
Type: Bracket
Data: )
---### TOKEN ###---
Type: Comment
Data: ; The result is 2.999232 if anyone is wondering

And here is the script that I tested with:

#include <Motivation.au3>

; This is a single line comment
; Mary had a little lamb
; Hello from EasyCodeIt!

MsgBox(0, 0, 1 + 2 - 3 * 4 / 5 ^ 6) ; The result is 2.999232 if anyone is wondering

I hope to have more updates soon, and hopefully the scanner/tokenizer will be completed! The next step would be actually analyzing the tokens and constructing a source tree from them (syntactic analysis is the buzzword).
-
Sorry guys, I really dropped the ball. I could not write satisfactory code for parsing, so I kept delaying work on it, and then things happened. One of those things was the opportunity to work again; I have not done any freelance work since I lost my regular client due to the effect of COVID on their business, so I was excited. I finally convinced myself to make some progress today, and I did. I have some not-so-good code which is able to parse comments... and only comments. That is the most basic thing I could find, so I worked on it. Here is how it currently works:

J:\Projects\EasyCodeIt\build_win>eci test.au3
---### TOKEN ###---
Type: Comment
Data: ; This is a single line comment
---### TOKEN ###---
Type: Comment
Data: ; Mary had a little lamb
---### TOKEN ###---
Type: Comment
Data: ; Wow comments
Unknown token encountered

and the source for test.au3:

; This is a single line comment
; Mary had a little lamb
; Wow comments

MsgBox

Here is the current parsing code that I have written:

/*
 * This file is part of EasyCodeIt.
 *
 * Copyright (C) 2020 TheDcoder <[email protected]>
 *
 * EasyCodeIt is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <https://www.gnu.org/licenses/>.
 */

#include <stdbool.h>
#include <stdio.h>

#include "parse.h"
#include "utils.h"

const char CHR_WHITESPACE[] = {' ', '\t', '\n'};
const char CHR_COMMENT = ';';
const char CHR_DIRECTIVE = '#';

struct TokenCharMapElem {
	enum TokenType type;
	union {
		const char chr;
		const char *chr_arr;
	};
};

static void print_token(struct Token *token) {
	puts("---### TOKEN ###---");
	char *token_type;
	switch (token->type) {
		case TOK_WHITESPACE:
			token_type = "Whitespace";
			break;
		case TOK_COMMENT:
			token_type = "Comment";
			break;
		case TOK_DIRECTIVE:
			token_type = "Directive";
			break;
	}
	fputs("Type: ", stdout);
	puts(token_type);
	fputs("Data: ", stdout);
	for (size_t c = 0; c < token->data_len; c++) putchar(token->data[c]);
	putchar('\n');
}

void parse(char *code) {
	tokenize(code);
}

void tokenize(char *code) {
	char *curr_char = code;
	while (true) {
		// Check if whitespace
		if (chrcmp(*curr_char, (char *) CHR_WHITESPACE, sizeof CHR_WHITESPACE)) {
			// Advance to the next character
			++curr_char;
			continue;
		}

		// Do it all manually

		// Comment
		if (*curr_char == CHR_COMMENT) {
			struct Token tok = {
				.type = TOK_COMMENT,
				.data = curr_char,
			};
			for (++curr_char; *curr_char != '\n' && *curr_char != '\0'; ++curr_char);
			tok.data_len = (curr_char - tok.data);
			print_token(&tok);
			// Advance to the next character
			++curr_char;
			continue;
		}

		// EOF (Null terminator)
		if (*curr_char == '\0') break;

		// Error
		die("Unknown token encountered");
	}
}

There is definitely room for improvement, and I am working on it. Wish me luck everyone!
-
You should definitely link to this topic, but copy-pasting your first post is also an option. Also, including your code sample is a must!
-
@dmob Please create a ticket in the bug tracker so that the devs can track this bug.
-
Ah, nice. I see, so you need to search the contents of the files themselves... Aside from RTFC's recommendation, you may also want to look into "the silver searcher" (ag), which is an efficient tool for searching for text in files.
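A couple of example invocations (the patterns and paths are made up); -i makes the search case-insensitive, and -l prints only the names of matching files:

ag -i "invoice number" C:\Data\Reports
ag -l "TODO" C:\Projects\MyApp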
-
Not sure why you want to use the search feature of Explorer; does it work better now in Windows 10? It was always slow and usually not useful in Windows XP and 7. If you want to take advantage of indexing, there is a program called Everything which indexes all files on an NTFS partition. Its search is crazy fast. I think it might even have a CLI, which you can use from AutoIt.
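If it does, calling it from AutoIt would look something like this rough sketch; I am assuming here that Everything's command-line client is named es.exe and is on PATH:

#include <AutoItConstants.au3>

; Run the (assumed) es.exe client and capture the matching paths it
; prints to its standard output
Local $iPID = Run(@ComSpec & ' /c es.exe "*.au3"', "", @SW_HIDE, $STDOUT_CHILD)
Local $sOutput = ""
While 1
    $sOutput &= StdoutRead($iPID)
    If @error Then ExitLoop
    Sleep(10)
WEnd
MsgBox(0, "Matches", $sOutput)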
-
Sorry guys, it was kind of a bad day today. It started with a power outage which lasted a good 4 hours and left me muggy and irritated. After that I got busy with lunch, catching up with other stuff, watching videos etc., so I couldn't really set aside time for coding. I did make some minor progress: I wrote a structure for storing tokens and a skeleton function. I now have to figure out how to deal with whitespace, which should be relatively easy. Hope to have something working by tomorrow!
-
I am working on a tokenizer, which will be the first stage in parsing; some call it the scanner phase. It basically takes the raw text and converts it into tokens (variable, string, number, operator, keyword etc.), which are much easier to work with in code. The next step would be syntactic analysis, which is just a fancy term for the process of checking whether the tokens are in the right order (e.g. checking if 123 = $number is in the right order... spoiler alert: it is not). As usual, not much work code-wise, but I have a plan in my mind now; I spent most of my time today researching how enums work in C and whether linked lists are really the best way to store dynamically allocated data (a rough sketch of what I have in mind is below). Another nice thing is that I have finally created a repository with the code: https://github.com/DcodingTheWeb/EasyCodeIt The code is very basic right now, it just prints the contents of the script to the standard output; I will update it as I work on features. Don't forget to give the repository a 🌟 star if you like what I am doing!
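The linked-list idea, roughly; an illustrative sketch with made-up names, not code from the repository:

#include <stddef.h>
#include <stdlib.h>

// Simplified stand-in for the real token structure
struct Token {
	char *data;
	size_t data_len;
};

// One node per token; each node holds its token and points to the next
struct TokenNode {
	struct Token token;
	struct TokenNode *next;
};

// Append a token after the given tail, returning the new tail
// (or NULL if allocation fails)
struct TokenNode *token_list_append(struct TokenNode *tail, struct Token token) {
	struct TokenNode *node = malloc(sizeof *node);
	if (!node) return NULL;
	node->token = token;
	node->next = NULL;
	if (tail) tail->next = node;
	return node;
}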