Ahh, thanks! I'm not familiar at all with GitHub, so I'm just now getting a chance to explore it and figure things out. It's actually a pretty decent idea.
Also, having some fun testing out new features:
#include "huc.h"
#include "string.h"

struct BMP {
    int height;
    int width;
    char name[10];
};

struct BMP bmp;
struct BMP png;
struct BMP *bmp_ptr;

main()
{
    set_color_rgb(1, 7, 7, 7);
    set_font_color(1, 0);
    set_font_pal(0);
    load_default_font();
    disp_on();
    cls();

    /* direct member access */
    bmp.height = 1;
    bmp.width = 1;
    strcpy(bmp.name, "test");
    put_string(bmp.name, 8, 5);

    /* access through a struct pointer */
    bmp_ptr = &bmp;
    bmp_ptr->width = 256;
    bmp_ptr->height = 192;
    strcpy(bmp_ptr->name, "Ginger");
    put_string(bmp.name, 8, 6);

    /* repoint the pointer at a second struct */
    bmp_ptr = &png;
    bmp_ptr->height = 32;
    bmp_ptr->width = 32;
    strcpy(bmp_ptr->name, "Bonk");
    put_string(png.name, 8, 7);

    /* and back to the first one */
    bmp_ptr = &bmp;
    strcpy(bmp_ptr->name, "test2");
    put_string(bmp_ptr->name, 8, 8);

    while (1) {}   /* idle forever */

    return 0;
}
Note: I'm also looking at code improvements. The compiler sometimes optimizes for values kept in A:X, which is decent, but there are occasions like temp++; where it loads the 16-bit value of temp (which is a char) into A:X, and then just does a simple inc on temp right after it, so the 16-bit load is wasted work.
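For example, something like this (a minimal sketch; the exact instructions emitted will vary with the HuC version):

char temp;

tick()
{
    /* Observed pattern: the compiler first loads temp into A:X as a
       16-bit value, then increments temp in memory with a plain inc
       anyway, so the preceding 16-bit load is dead code. */
    temp++;
}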
The other thing that bothers me is that the extension of 8-bit values into 16-bit values (A:X) is done at runtime instead of at compile time when the variable is signed. Because the compiler does this extension automatically, sign extension has to be handled at runtime, which means unsigned chars will always be faster than signed chars. If the compiler didn't automatically extend everything to 16 bits, this wouldn't be a problem, so it's an area where the compiler still needs work. By comparison, extending an unsigned char costs just one extra byte and 2 extra CPU cycles (cla).
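Here's a rough illustration of the difference (assumed codegen, not checked against any particular HuC build; demo() is just a placeholder name):

char s;            /* signed char */
unsigned char u;   /* unsigned char */

demo()
{
    /* Signed: promoting s into A:X needs a runtime sign extension. */
    if (s < 10) {
        s = 0;
    }

    /* Unsigned: promoting u into A:X just clears the high half,
       a single cla on the HuC6280 (one byte, 2 cycles). */
    if (u < 10) {
        u = 0;
    }
}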
In other words, the compiler is automatically type casting every char, and it shouldn't. There should be a specific set of build macros just for handling char values (I think Dave Shadoff was working on this as an optimization switch in the 3.21 version).
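Until something like that lands, a user-side workaround (just a suggestion, not tied to any specific HuC release) is to declare byte-sized counters and flags as unsigned char so the automatic promotion stays cheap:

unsigned char i;       /* loop counter kept unsigned on purpose */
unsigned char total;

sum_demo()
{
    total = 0;
    for (i = 0; i < 16; i++) {
        /* unsigned char promotion is just a cla, with no runtime
           sign-extend step */
        total += i;
    }
}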