Fleng 22 (concurrent logic programming)

(call-with-current-continuation.org)

120 points | by 082349872349872 a day ago

9 comments

  • sevensor a day ago

    Interesting, but I’m a bit overwhelmed by the presentation of three different languages at once. Suppose I have a scheduling problem; could I use fleng to obtain feasible schedules, and would it be the right tool for the job?

    • PaulHoule 21 hours ago

      I'm excited to see that it can be customized into several dialects.

      In the Japanese 5th generation project they thought they could parallelize Prolog, but they found out early on that it could not be parallelized, so they came up with KL1, which could be parallelized but is not as nice as Prolog.

      I'd love to have a "language construction set" where I could trade off expressiveness and efficiency and such.

      My take on facts and rules is that they are somewhat portable between different regimes. For instance, the same set of rules can work well in a forward-chained mode, as in a RETE rules engine, in a backward-chained mode using Prolog, or even in an SMT solver for consistency checking.

      I call it "rules and schemes" where you reuse the same rules with different execution strategies to solve different inference problems. In fact you want it to be easy to move work between build and run time.
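
      To make that concrete, here is a minimal sketch in plain Prolog (not Fleng); the parent/2 facts and ancestor/2 rules are invented for illustration:

      ```
      % Facts: the data layer.
      parent(alice, bob).
      parent(bob, carol).

      % Rules: declarative enough to reuse under different execution strategies.
      ancestor(X, Y) :- parent(X, Y).
      ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).

      % Backward chained (Prolog's default): ?- ancestor(alice, Who).
      % The same two rules could be handed to a forward chainer such as a
      % RETE engine, materialising new ancestor/2 facts as parent/2 facts
      % arrive, or translated into implications for an SMT solver to check
      % the rule set for consistency.
      ```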

    • sinuhe69 17 hours ago

      For scheduling and other classic constrained optimization problems, I think languages like Zinc or Picat are the best. They are quick to learn and you can have results in no time.
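
      For a flavour of the kind of model those languages make short work of, here is a minimal feasibility sketch; it uses SWI-Prolog's clpfd library rather than Zinc or Picat to stay close to the thread's Prolog syntax, and the three tasks and the deadline of 10 are made up:

      ```
      :- use_module(library(clpfd)).

      % Three tasks on one machine with durations 3, 2 and 4,
      % all of which must finish by time 10.
      schedule(Starts) :-
          Starts = [S1, S2, S3],
          Durations = [3, 2, 4],
          Starts ins 0..10,
          S1 + 3 #=< 10, S2 + 2 #=< 10, S3 + 4 #=< 10,
          serialized(Starts, Durations),   % no two tasks overlap
          labeling([], Starts).

      % ?- schedule(S).   % one feasible schedule, e.g. S = [0, 3, 5]
      ```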

    • Avshalom a day ago

      I think probably not; it seems to be more of a lowest-common-denominator intermediate language to be compiled to. Strand or FGHC could be the right tools, however (well, except for being extremely niche languages).

  • nihil75 6 hours ago

    I don't get it. How is this different than starting new threads?

    In the article example, it doesn't look like anything is returned from each parallel function call. The main loop just invokes the function for each i, and they print when done. No shared memory, no scheduling or ordering... what's the advantage here?

    In the code examples, it seems shared memory and scheduling are not a thing either. It's more like functional or chained programming: a function calls the next function and passes its output to it. Each loop runs independently, asynchronously from the others. It reminds me of the ECS model in gamedev.

    That's great and all, but it doesn't solve or simplify the intricacies of parallel programming so much as it circumnavigates them, right?

    Is the advantage that it's low-level and small?

    I think the same "concept" can be done in Bash: ```for i in $(seq 1 100); do fizzbuzz "$i" & done```

    • cess11 6 hours ago

      What is the equivalent of Prolog facts in your Bash example? Are they as easy to add and retract as in Prolog?
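
      For reference, in Prolog each direction is a single builtin; a minimal sketch, where job/1 is just an invented example predicate:

      ```
      :- dynamic job/1.

      add_job(Id)    :- assertz(job(Id)).   % add a fact at run time
      remove_job(Id) :- retract(job(Id)).   % retract it again

      % ?- add_job(42), job(X).       % X = 42
      % ?- remove_job(42), job(_).    % fails: the fact is gone
      ```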

      • nihil75 5 hours ago

        Are facts used in the Fleng fizzbuzz example?

        You're probably right. I'm sure this has more features coming from logic programming, and I'm just too hung up on the concurrent part of the title.

        • cess11 4 hours ago

          Sure, there's one, 'loop2(_, 101).'.

          If it weren't a toy problem but rather a larger set of rules describing a more salient algorithm, it would matter more whether you could pour in more facts as data enters the system.

          I get your point; I personally do a lot of crude concurrency with POSIX fork() and shell spawns from within suitable programming languages, e.g. Picolisp or Elixir.
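
          For contrast with fork()-style concurrency, the shared-logic-variable style these languages are built around can be approximated even in plain SWI-Prolog with freeze/2; a rough sketch, not real Fleng, with producer/consumer as made-up names:

          ```
          % A single-assignment variable acts as a channel: the consumer is
          % set up first but only runs once the producer binds Msg, giving
          % dataflow synchronisation without locks or an explicit join.
          consumer(Msg) :-
              freeze(Msg, format("consumer got ~w~n", [Msg])).

          producer(Msg) :-
              Msg = fizzbuzz(15).

          demo :-
              consumer(Msg),     % suspends on the unbound Msg
              producer(Msg).     % binding Msg wakes the consumer

          % ?- demo.
          % consumer got fizzbuzz(15)
          ```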

          • nihil75 an hour ago

            Thanks! Appreciate your input and perspective.