  • 141k Topics
    706k Posts

    The first step is to make a debug build:

    /usr/lib/qt6/bin/qt-cmake -DCMAKE_BUILD_TYPE=Debug ../src/

    If you are using a GCC toolchain, this will build your executables with the -g option, producing debug symbols that gdb can work with:

    make VERBOSE=1
    ...
    /usr/bin/c++ -g ...

    If you must force the -ggdb option or one of the -g<n> variations then you'll likely have to do that manually via CFLAGS and CXXFLAGS.
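    For example, a minimal sketch of doing that at configure time (the -ggdb3 level is just an illustration, and this only takes effect on the first configure of a clean build directory):

    CFLAGS="-ggdb3" CXXFLAGS="-ggdb3" /usr/lib/qt6/bin/qt-cmake -DCMAKE_BUILD_TYPE=Debug ../src/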

  • Jobs, project showcases, announcements - anything that isn't directly development
    4k Topics
    22k Posts

    @ALPs said in Can't reply to a post because my comment gets flagged as spam:

    Perhaps contact Akismet.com, so they appreciate there is an issue.
    At the same time, maybe Qt could pressure Akismet.com to resolve this issue.

    I think it's a plugin, so the service itself isn't in direct control of this. It's probably a slightly aggressive configuration that is blocking me. I've been digging for more info, and the common factor in the filtering seems to be a user's reputation on the forum.

    @VRonin said in Can't reply to a post because my comment gets flagged as spam:

    Try uploading your code to https://pastebin.com/ and post the link instead

    That might work, but I imagine links get filtered just like code blocks. I'll try pastebin if I don't get this working pretty soon.

  • Everything related to designing and design tools

    108 Topics
    329 Posts

    OK, it seems I didn't search properly:

    https://forum.qt.io/topic/136316/importing-data-to-ds-from-figma
    https://forum.qt.io/topic/136905/is-qtbridge-available-with-the-open-source-license/2

    So it seems that the QtBridge plugin can be used with the free version of Figma, but not with the open source version of Qt? In other words, designs could be exported from Figma with a free version of Figma, but could be imported into DS only with a Qt Enterprise licence?
    Those posts are two years old. Is this still the case?

  • Everything related to the QA Tools

    56 Topics
    164 Posts

    I recently had the opportunity to work with Bitsquery Web Retriever after falling victim to a cryptocurrency scam, and I can’t express how grateful I am for their help. From the moment I reached out, their team emphasized trust and integrity, which immediately put me at ease.

    What truly sets Bitsquery apart is their commitment to transparency. They took the time to explain their services, fees, and processes in a way that was easy to understand. I felt confident embarking on this journey, knowing I was in good hands.

    Their expertise in crypto tracing and asset recovery is evident. Using advanced forensic tools and blockchain analysis, they were able to track down my stolen cryptocurrency much faster than I expected. The cutting-edge tracing technology they employ really cuts down investigation times, and I appreciated their collaboration with law enforcement, which added an extra layer of reassurance.

    The testimonials I read before choosing Bitsquery proved to be accurate; they are professionals who genuinely care about their clients. My experience mirrored those positive accounts, and I was impressed by their dedication to client satisfaction.

    I can confidently say that Bitsquery Web Retriever is the go-to expert for cryptocurrency restoration. They have all the right tools, resources, and knowledge to handle a range of cases effectively. Whether you’re dealing with ransomware, user deletions, or other issues, their team is well-equipped to help you reclaim your assets.

    If you find yourself in a similar situation, I highly recommend Bitsquery Web Retriever. Their unmatched forensic capabilities and commitment to helping clients restore their value make them an ideal partner in recovering stolen cryptocurrency. Thank you, Bitsquery, for helping me regain my peace of mind!

  • Everything related to learning Qt.

    375 Topics
    2k Posts

    @SGaist Thanks a lot.
    Actually I know that book too. Added to cart.
    I guess I should read that book and load my bullets.
    Thanks again!

  • 2k Topics
    12k Posts
    JonBJ

    @VRonin , @J-Hilk
    So nobody wanted to examine/implement my suggested algorithm, shame.... :( ;-)

    So.... I sat down this morning and produced an implementation of my approach. Here is complete, standalone code (C++17+): it has a main() which generates a bunch of min-max range pairs randomly, calls the best ChatGPT implementation (i.e. @VRonin's second one) and my "jon" implementation (extensively commented), and reports the answer and the time for each.

    #include <algorithm>
    #include <chrono>
    #include <cstdint>
    #include <iostream>
    #include <random>
    #include <vector>

    std::pair<int, int> chatGPTNumberInMostIntervals(const std::vector<std::pair<int, int>>& intervals, int upperLimit)
    {
        if (intervals.empty())
            return {-1, -1};

        std::vector<int> frequency(upperLimit + 2, 0); // Use +2 to handle boundary at upperLimit properly

        // Mark the start and end points for each interval
        for (const auto& interval : intervals) {
            ++frequency[interval.first];
            --frequency[interval.second + 1];
        }

        // Apply the prefix sum technique to get the frequency of each number
        int maxCount = 0;
        int result = 0;
        int currentCount = 0;
        for (int i = 0; i <= upperLimit; ++i) {
            currentCount += frequency[i];
            if (currentCount > maxCount) {
                maxCount = currentCount;
                result = i;
            }
        }
        return {result, maxCount};
    }

    std::pair<int, int> jonNumberInMostIntervals(const std::vector<std::pair<int, int>>& intervals, int upperLimit)
    {
        if (intervals.empty())
            return {-1, -1};

        // vector of size intervals.size()
        // the pair<int, int> elements will hold <min-bound, count-of-min-bound> as each element
        std::vector<std::pair<int, int>> frequency(intervals.size());

        // fill frequency[] with the min-bounds in each first-member, count in second-member doesn't matter here
        for (int i = 0; i < intervals.size(); i++)
            frequency[i] = {intervals[i].first, 0};

        // sort frequency[] by the min-bounds value in each element, ascending
        std::sort(frequency.begin(), frequency.end(),
                  [](const std::pair<int, int> &x, const std::pair<int, int> &y) { return x.first < y.first; });

        // fill the sorted frequency[] count-of-min-bound in each element
        // frequency[0].count = 1, frequency[1].count = 2, frequency[i].count = i + 1
        for (int i = 0; i < frequency.size(); i++)
            frequency[i].second = i + 1;

        // go through the interval[] pair elements (random order for max range values)
        // go *down* through the frequency[] pair elements (sorted by min range values ascending)
        // while the frequency min values are greater than the interval max value decrement that element's frequency count
        for (int i = 0; i < intervals.size(); i++)
            for (int j = frequency.size() - 1; j >= 0 && frequency[j].first > intervals[i].second; j--)
                frequency[j].second--;

        // find the frequency[] pair element with the greatest count
        auto most_frequent = std::max_element(frequency.begin(), frequency.end(),
                  [](const std::pair<int, int> &x, const std::pair<int, int> &y) { return x.second < y.second; });
        return *most_frequent;
    }

    int main()
    {
        constexpr int upperLimit = 1000000000;
        constexpr int num_intervals = 10;
        std::vector<std::pair<int, int>> intervals;

        std::random_device rd;
        std::mt19937 gen(rd());
        std::uniform_int_distribution<int> distrib(0, 100);
        for (int i = 0; i < num_intervals; i++) {
            std::pair<int, int> pair(distrib(gen), distrib(gen));
            if (pair.first > pair.second)
                std::swap(pair.first, pair.second);
            intervals.push_back(pair);
            std::cout << "(" << pair.first << ", " << pair.second << ")" << std::endl;
        }

        uint64_t start_time = std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::system_clock::now().time_since_epoch()).count();
        std::pair<int, int> result = chatGPTNumberInMostIntervals(intervals, upperLimit);
        uint64_t end_time = std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::system_clock::now().time_since_epoch()).count();
        std::cout << "ChatGPT: " << "Number: " << result.first << ", Frequency: " << result.second << ", Time: " << end_time - start_time << std::endl;

        start_time = std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::system_clock::now().time_since_epoch()).count();
        result = jonNumberInMostIntervals(intervals, upperLimit);
        end_time = std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::system_clock::now().time_since_epoch()).count();
        std::cout << "jon: " << "Number: " << result.first << ", Frequency: " << result.second << ", Time: " << end_time - start_time << std::endl;

        return 0;
    }

    I am deliberately testing with 10 interval ranges but 1 billion(!) as upperLimit, i.e. large.

    In that case ChatGPT uses (a) a large amount of memory (its frequency[] vector is size 1 billion elements) and (b) a large amount of iterations/time (it looks through the 1 billion elements in the vector). OTOH mine uses (a) small memory (frequency[] vector is same size as intervals[] vector) and (b) small amount of iterations/time (it looks through just the intervals[]/frequency[] vector elements).

    Here is the output compiled and run for debug:

    ChatGPT: Number: 17, Frequency: 6, Time: 6391
    jon: Number: 17, Frequency: 6, Time: 0

    and here compiled and run for release:

    ChatGPT: Number: 45, Frequency: 8, Time: 2355
    jon: Number: 45, Frequency: 8, Time: 0

    So, I don't mean to blow my trumpet, but mine is somewhere between thousands and "infinitely" faster than the ChatGPT one, and uses like a hundred-millionth of the memory, at least with a "large" upperLimit... ;-) I realise it won't make so much difference with a "smaller" upper limit, and that is your RL case, but still....

    @VRonin
    Having taken the time to produce this to address your "what I'm doing now but feels sooooo inefficient", I hope you might take a look at it/test it for yourself.

    My conclusion: mine is more work to figure out than copying a (decent) solution from ChatGPT, but in view of the vast speed and space improvements I feel I do not yet need to hang up my programming clogs; I am not replaced by ChatGPT, yet... :D

    P.S.
    In the interests of clarity/fairness I must admit where my algorithm is not so good. As you increase the num_intervals mine will use more memory for the frequency[] array and take more time through its loops/sorting, where ChatGPT's will be relatively unaffected. Basically, for both speed and space, ChatGPT's is affected by upperLimit size while mine is affected by the size of num_intervals. So the "best" depends on whether you have a large range-bound in your ranges or a large number of ranges.
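    As an illustration of that trade-off, here is a minimal sketch (assuming the two functions from the code above are in scope; the 100x threshold is an arbitrary, unmeasured guess, not something benchmarked here) of picking an implementation based on the input sizes:

    #include <utility>
    #include <vector>

    // Hypothetical dispatcher: prefer the sorted-endpoints version when the value
    // range dwarfs the number of intervals, otherwise use the prefix-sum version.
    std::pair<int, int> numberInMostIntervals(const std::vector<std::pair<int, int>>& intervals,
                                              int upperLimit)
    {
        if (static_cast<long long>(upperLimit) > 100LL * static_cast<long long>(intervals.size()))
            return jonNumberInMostIntervals(intervals, upperLimit);
        return chatGPTNumberInMostIntervals(intervals, upperLimit);
    }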

  • 4k Topics
    17k Posts

    No. The state machine (SM from here on) moves from one state to another, that's all. Yet in this same program I use two other SMs that cause no trouble; everything works as expected (and in fact it saves me time and code, and lets me formalize the system's dynamics graphically).

    After a few tests I came to the conclusion that the SM's events are processed last, just before control is handed back to the user of the UI, or else it is the event loop that is processed after the code.
    As a result, state changes are not carried out immediately after a submitEvent(), which is a problem when you query the SM's active state right after the submitEvent() and before the application hands control back to the user. Hence the solution being considered, which the Qt documentation advises against, but which works.

    Are you suggesting stopping the SM after each submitEvent()? In which case it would have to be restarted before making a new state change?

    I have imagined another solution, without knowing whether it could work:

    within the application, start another event loop; attach the state machines to this new event loop, which would not be "polluted" by the UI events.

    If it turns out that the event-loop code is only executed after the application code, that would solve nothing, unless this new event loop for the SM is started in another thread. But for me, that is still unknown territory...
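    A minimal sketch, assuming a QScxmlStateMachine loaded from an SCXML file (the file name "statemachine.scxml" and the event name "go" below are placeholders), of reacting to state changes through the machine's reachedStableState() signal instead of polling the active state right after submitEvent():

    // Sketch only: requires the Qt SCXML module (QT += scxml / Qt6::Scxml).
    #include <QCoreApplication>
    #include <QDebug>
    #include <QScxmlStateMachine>
    #include <QString>

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);

        QScxmlStateMachine *machine = QScxmlStateMachine::fromFile(QStringLiteral("statemachine.scxml"));

        // React once the machine has settled into a stable configuration,
        // instead of querying the active state immediately after submitEvent().
        QObject::connect(machine, &QScxmlStateMachine::reachedStableState, machine, [machine]() {
            qDebug() << "Active states:" << machine->activeStateNames();
        });

        machine->start();
        machine->submitEvent(QStringLiteral("go"));

        return app.exec();
    }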

  • This is where all the posts related to the Qt web services go. Including severe silliness.
    1k Topics
    10k Posts

    I had the same goal. I used Docker (Windows 10) and CROPS to run a Debian container on Win10 for cross compiling.
    With a hefty machine (8-12 cores, 32 GB RAM) I am able to cross compile from Win10 to the target (Toradex Colibri) pretty easily.

    https://docs.yoctoproject.org/dev-manual/start.html#setting-up-to-use-cross-platforms-crops
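    For reference, the CROPS workflow in that guide boils down to mounting your work area into the crops/poky container; a minimal sketch (the volume name myvolume is just the placeholder used in the docs, adjust for your own setup):

    docker volume create myvolume
    docker run --rm -it -v myvolume:/workdir crops/poky --workdir=/workdir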