Wikifunctions is a new site that has been added to the list of sites operated by the WMF. I definitely see uses for it in automating updates on Wikipedia and in bots (and as a reference for programmers), but its goal is to translate Wikipedia articles into more languages by writing them as code that carries a lot of linguistic information. I have mixed feelings about this: I don’t like the existing programs that automatically generate articles (see the Cebuano and Dutch Wikipedias), and I worry that the system will be too complicated for average people.

  • GenderNeutralBro@lemmy.sdf.org · 35 points · 6 months ago

    Sounds like a great idea. Plain English (or any human language) is not the best way to store information. I’ve certainly noticed mismatches between the data in different languages, or across related articles, because they don’t share the same data source.

    Take a look at the article for NYC in English and French and you’ll see a bunch of data points, like total area, that are different. Not huge differences, but any difference at all is enough to demonstrate the problem. There should be one canonical source of data shared by all representations.

    Wikipedia is available in hundreds of languages. Why should hundreds of editors need to update the NYC page every time a new census comes out with new population numbers? Ideally, that would require only one change to update every version of the article.

    In programming, the convention is to separate the data from the presentation. In this context, plain English is the presentation, and weaving the actual data into it is suboptimal. Something like the population or area of a city is not language-dependent and should not be stored in a language-dependent way.
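
    To make that concrete, here is a toy sketch of the separation. The record, the templates, and the figures are purely illustrative (the population is roughly the 2020 census number), not anything Wikipedia actually uses:

    ```python
    # One canonical data record; figures are approximate and for illustration only.
    CITY_DATA = {
        "nyc": {"population": 8_804_190, "area_km2": 1223.6},
    }

    # Per-language presentation templates; only the wording differs.
    TEMPLATES = {
        "en": "New York City has a population of {population:,} and covers {area_km2} km².",
        "fr": "New York compte {population:,} habitants et couvre {area_km2} km².",
    }

    def render(city: str, lang: str) -> str:
        """Weave the canonical data into the chosen language's presentation."""
        return TEMPLATES[lang].format(**CITY_DATA[city])

    print(render("nyc", "en"))
    print(render("nyc", "fr"))
    ```

    Update the single record after a new census and every rendering picks up the new number.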

    Ultimately, this is about reducing duplicate effort and maintaining data integrity.

    • schnurrito@discuss.tchncs.de · 13 points · 6 months ago

      This problem was solved in like 2012 or 2013 with the introduction of Wikidata, but not all language editions have decided to use that.

      • GenderNeutralBro@lemmy.sdf.org · 3 points · 6 months ago

        How common is it in English? I haven’t checked a lot of articles, but I did check the source of the English and French NYC articles I linked and it seems like all the information is hardcoded, not referenced from Wikidata.
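
        For anyone wondering what “referenced from Wikidata” looks like at the data level, here is a rough sketch that pulls NYC’s population straight from Wikidata’s public SPARQL endpoint (Q60 is the item for New York City, P1082 the population property). On-wiki, templates use Lua modules or parser functions rather than an external query, so treat this purely as an illustration of “one canonical source”:

        ```python
        # Sketch: fetch New York City's population from Wikidata (needs `requests`).
        import requests

        QUERY = "SELECT ?population WHERE { wd:Q60 wdt:P1082 ?population . }"

        resp = requests.get(
            "https://query.wikidata.org/sparql",
            params={"query": QUERY, "format": "json"},
            headers={"User-Agent": "wikidata-example/0.1"},  # the endpoint expects a User-Agent
        )
        resp.raise_for_status()
        bindings = resp.json()["results"]["bindings"]
        print(bindings[0]["population"]["value"])  # one value every language edition could reuse
        ```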

      • rottingleaf@lemmy.zip · 1 point · 6 months ago

        “but not all language editions have decided to use that.”

        Some people like the little power, which they call “meritocracy”, of deciding what belongs in the article and what doesn’t.

    • robotica@lemmy.world · 4 points · 6 months ago

      Disclaimer: I didn’t do any research on this, but what would be wrong with just having an AI translate the text, given a reliable enough AI? No code required, just plain human language.

      • GenderNeutralBro@lemmy.sdf.org · 6 points · 6 months ago

        This will help make machine translation more reliable, ensuring that objective data does not get transformed along with the language presenting that data. It will also make it easier to test and validate the machine translators.

        Any automated translations would still need to be reviewed. I don’t think we will (or should) see totally automated translations in the near future, but I do think machine translators could be a very useful tool for editors.

        Language models are impressive, but they are not efficient data-retrieval systems. Denny Vrandecic, the founder of Wikidata, has a couple of insightful videos on this topic.

        This one talks about knowledge graphs in general, from 2020: https://www.youtube.com/watch?v=Oips1aW738Q

        This one is from last year and is specifically about how you could integrate LLMs with the knowledge graph to greatly increase their accuracy, utility, and efficiency: https://www.youtube.com/watch?v=WqYBx2gB6vA

        I highly recommend that second video. He does a great job laying out what LLMs are efficient for, what more conventional methods are efficient for, and how you can integrate them to get the best of both worlds.
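
        In code terms, the pattern he describes boils down to something like the sketch below (the structure and names are mine, not from the videos): the facts are retrieved from the knowledge graph, and the model is only asked to phrase them.

        ```python
        # Sketch of grounding a language model in a knowledge graph: facts come from
        # a structured source, the model only handles wording. The actual model call
        # is omitted because any API would do.
        def grounded_prompt(facts: dict[str, object], target_language: str) -> str:
            """Build a prompt that pins every factual value to a retrieved fact."""
            fact_lines = "\n".join(f"- {key}: {value}" for key, value in facts.items())
            return (
                f"Using only the facts below, write one sentence in {target_language}. "
                "Do not alter any number.\n" + fact_lines
            )

        # The values would come from Wikidata or a similar source, not from the
        # model's memory, which also makes the output straightforward to validate.
        print(grounded_prompt({"city": "New York City", "population": 8_804_190}, "French"))
        ```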

  • AbouBenAdhem@lemmy.world · 15 points · 6 months ago

    I assume the main benefit will be for users of less-spoken languages, who currently get out-of-date articles or none at all.

  • Lvxferre@mander.xyz · 5 points · edited · 6 months ago

    “but their goal is to translate Wikipedia articles to more languages by writing them in code that has a lot of linguistic information”

    That’ll get unruly really fast.

    Languages simply don’t agree on how to divide meaning among words. Or on grammatical case. Or on if, when, and how to do agreement.

    Just for the sake of example: how are they going to keep track of case in a way that doesn’t break Hindi, or Basque, or English, or Guarani? Or grammatical gender for a word like “milk”? (Not even the Romance languages agree on it.) At a certain point it simply becomes easier to write the article in all those languages than to code something to generate it for you.
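
    To put a number on it, here is a toy sketch of just the lexical metadata you would need to say “the milk” with the right definite article in four languages, before touching case, number, or agreement. The data structure and renderer are invented; only the genders are real:

    ```python
    # Hypothetical lexical entries; the gender facts are real, everything else is
    # invented for illustration.
    MILK = {
        "fr": {"lemma": "lait",  "gender": "masculine"},  # le lait
        "es": {"lemma": "leche", "gender": "feminine"},   # la leche
        "pt": {"lemma": "leite", "gender": "masculine"},  # o leite
        "de": {"lemma": "Milch", "gender": "feminine"},   # die Milch (nominative only)
    }

    # Even the definite article is a (language, gender) lookup; German would also
    # need case, and other languages handle definiteness with suffixes or no
    # article at all.
    DEFINITE_ARTICLE = {
        ("fr", "masculine"): "le",
        ("es", "feminine"): "la",
        ("pt", "masculine"): "o",
        ("de", "feminine"): "die",
    }

    def render_definite(lang: str) -> str:
        entry = MILK[lang]
        return f"{DEFINITE_ARTICLE[(lang, entry['gender'])]} {entry['lemma']}"

    for lang in MILK:
        print(lang, render_definite(lang))
    ```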


    I think that the best use scenario is to automate tidbits of highly changing data. It’s fairly limited but it could be useful.

    • Jojo@lemm.ee · 2 points · 6 months ago

      They’re just going to write all the articles in Lojban.

      • Lvxferre@mander.xyz · 1 point · 6 months ago

        Not even that would do the trick: practical usage of Lojban relies heavily on fu’ivla (loanwords), which carry over the semantic scope of the words they were borrowed from. .u’i I’d like to see them try, though.

    • Lvxferre@mander.xyz · 1 point · edited · 6 months ago

      I’ll reply to myself to highlight a point and issue a challenge for those who assume that WMF’s apparent goal - translating Wikipedia articles into more languages by writing them in code that carries a lot of linguistic information - is actually viable:

      Here’s an excerpt from an actual Wikipedia article: “the solubility of these gases depending on the temperature and salinity of the water.” Show me all the linguistic information a writer would need to input to convey the same information in the system that goal idealises, in a way that wouldn’t output “then who was phone?”-tier nonsense in some languages. Then I’ll show you why it would still output nonsense for some languages.

      Too much work? Then feel free to do it just for “of the water”. It’s a single PP (prepositional phrase); how hard could it be? /s

      Hic Rhodus, hic salta. (“Here is Rhodes, jump here.”)

      [Edit reason: clarity.]

  • abhibeckert@lemmy.world · 4 points · edited · 6 months ago

    Your description doesn’t seem to match what the site does. For example, the front page has a function that converts uppercase text to lowercase.

    It’s not article content - it’s an interactive utility.
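
    For reference, Wikifunctions implementations can be written in Python (or JavaScript); a minimal sketch of a “convert to lowercase” implementation, with an invented name and signature, is just:

    ```python
    # Minimal sketch; the real implementation on the site may use different
    # naming and wrapper conventions.
    def to_lowercase(text: str) -> str:
        """Return the input with all cased characters converted to lowercase."""
        return text.lower()

    assert to_lowercase("Hello, WORLD!") == "hello, world!"
    ```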

    • flavia@lemmy.blahaj.zone (OP) · 2 points · 6 months ago

      The site itself is for contributors who want to create functions and write code for them. Examples of how it might be used in the future for articles:

      • Z11884 for articles about chemicals.
      • Z11302 for use in prose.