  • Conference paper
    Kotonya N, Toni F, 2020, Explainable Automated Fact-Checking for Public Health Claims, 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020), Publisher: ACL, Pages: 7740-7754

    Fact-checking is the task of verifying the veracity of claims by assessing their assertions against credible evidence. The vast majority of fact-checking studies focus exclusively on political claims. Very little research explores fact-checking for other topics, specifically subject matters for which expertise is required. We present the first study of explainable fact-checking for claims which require specific expertise. For our case study we choose the setting of public health. To support this case study we construct a new dataset, PUBHEALTH, of 11.8K claims accompanied by journalist-crafted, gold-standard explanations (i.e., judgments) to support the fact-check labels for claims. We explore two tasks: veracity prediction and explanation generation. We also define and evaluate, with humans and computationally, three coherence properties of explanation quality. Our results indicate that, by training on in-domain data, gains can be made in explainable, automated fact-checking for claims which require specific expertise.
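
    A minimal sketch of the first task above (veracity prediction), assuming a local TSV export of PUBHEALTH; this is a transparent bag-of-words baseline, not the models evaluated in the paper, and the file name pubhealth_train.tsv and the column names main_text and label are hypothetical placeholders:

      # Hypothetical baseline for PUBHEALTH veracity prediction, NOT the
      # paper's models; file name and column names are placeholders.
      import pandas as pd
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import classification_report
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline

      df = pd.read_csv("pubhealth_train.tsv", sep="\t")
      X_tr, X_te, y_tr, y_te = train_test_split(
          df["main_text"], df["label"], test_size=0.2, random_state=0)

      # TF-IDF + logistic regression: a weak, transparent reference point
      # against which in-domain gains (as reported in the paper) can be read.
      clf = make_pipeline(TfidfVectorizer(max_features=50000, ngram_range=(1, 2)),
                          LogisticRegression(max_iter=1000))
      clf.fit(X_tr, y_tr)
      print(classification_report(y_te, clf.predict(X_te)))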

  • Conference paper
    Cocarascu O, Stylianou A, Čyras K, Toni F et al., 2020, Data-Empowered Argumentation for Dialectically Explainable Predictions, 24th European Conference on Artificial Intelligence (ECAI 2020), Publisher: IOS Press, Pages: 2449-2456

    Today’s AI landscape is permeated by plentiful data and dominated by powerful data-centric methods with the potential to impact a wide range of human sectors. Yet, in some settings this potential is hindered by these data-centric AI methods being mostly opaque. Considerable efforts are currently being devoted to defining methods for explaining black-box techniques in some settings, while the use of transparent methods is being advocated in others, especially when high-stake decisions are involved, as in healthcare and the practice of law. In this paper we advocate a novel transparent paradigm of Data-Empowered Argumentation (DEAr in short) for dialectically explainable predictions. DEAr relies upon the extraction of argumentation debates from data, so that the dialectical outcomes of these debates amount to predictions (e.g. classifications) that can be explained dialectically. The argumentation debates consist of (data) arguments which may not be linguistic in general but may nonetheless be deemed to be ‘arguments’ in that they are dialectically related, for instance by disagreeing on data labels. We illustrate and experiment with the DEAr paradigm in three settings, making use, respectively, of categorical data, (annotated) images and text. We show empirically that DEAr is competitive with another transparent model, namely decision trees (DTs), while also providing naturally dialectical explanations.
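
    The toy sketch below conveys the flavour of prediction-as-debate; it is not the DEAr method from the paper, just nearest neighbours reframed so that training points act as arguments, label disagreement acts as attack, and the surviving side doubles as a dialectical explanation:

      # Toy "debate" classifier: neighbours are arguments, disagreeing labels
      # attack each other, and the largest conflict-free side wins. This is
      # k-NN in argumentative clothing, not the paper's DEAr algorithm.
      from collections import Counter
      import numpy as np

      def dialectical_predict(X_train, y_train, x, k=3):
          dists = np.linalg.norm(X_train - x, axis=1)
          nearest = np.argsort(dists)[:k]              # the debating arguments
          label, _ = Counter(y_train[i] for i in nearest).most_common(1)[0]
          survivors = [i for i in nearest if y_train[i] == label]
          defeated = [i for i in nearest if y_train[i] != label]
          return label, survivors, defeated            # prediction + debate outcome

      X = np.array([[0., 0.], [0., 1.], [1., 0.], [5., 5.], [5., 6.]])
      y = np.array(["neg", "neg", "neg", "pos", "pos"])
      print(dialectical_predict(X, y, np.array([0.5, 0.5])))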

  • Journal article
    Calvo RA, Peters D, Cave S, 2020, Advancing impact assessment for intelligent systems, Nature Machine Intelligence, Vol: 2, Pages: 89-91, ISSN: 2522-5839

  • Conference paper
    Lertvittayakumjorn P, Toni F, 2019, Human-grounded Evaluations of Explanation Methods for Text Classification, 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, Publisher: ACL Anthology, Pages: 5195-5205

    Due to the black-box nature of deep learning models, methods for explaining the models’ results are crucial to gain trust from humans and support collaboration between AIs and humans. In this paper, we consider several model-agnostic and model-specific explanation methods for CNNs for text classification and conduct three human-grounded evaluations, focusing on different purposes of explanations: (1) revealing model behavior, (2) justifying model predictions, and (3) helping humans investigate uncertain predictions. The results highlight dissimilar qualities of the various explanation methods we consider and show the degree to which these methods could serve for each purpose.
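
    As a concrete instance of one widely used model-agnostic method of the kind evaluated above, the sketch below runs LIME over a stand-in scikit-learn text classifier; the paper's models are CNNs, and the four-sentence corpus here is invented purely for illustration:

      # LIME word-level explanation of a text classifier's prediction.
      # The classifier and corpus are stand-ins, not the paper's setup.
      from lime.lime_text import LimeTextExplainer
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      texts = ["great film, loved it", "terrible plot, awful acting",
               "a delight from start to finish", "boring and predictable"]
      labels = [1, 0, 1, 0]
      clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
      clf.fit(texts, labels)

      explainer = LimeTextExplainer(class_names=["negative", "positive"])
      exp = explainer.explain_instance("loved the acting but the plot was boring",
                                       clf.predict_proba, num_features=4)
      print(exp.as_list())  # (word, weight) pairs behind the prediction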

  • Conference paper
    Čyras K, Letsios D, Misener R, Toni F et al., 2019, Argumentation for Explainable Scheduling, Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19), Publisher: AAAI, Pages: 2752-2759

    Mathematical optimization offers highly effective tools for finding solutions for problems with well-defined goals, notably scheduling. However, optimization solvers are often unexplainable black boxes whose solutions are inaccessible to users and which users cannot interact with. We define a novel paradigm using argumentation to empower the interaction between optimization solvers and users, supported by tractable explanations which certify or refute solutions. A solution can be from a solver or of interest to a user (in the context of 'what-if' scenarios). Specifically, we define argumentative and natural language explanations for why a schedule is (not) feasible, (not) efficient or (not) satisfying fixed user decisions, based on models of the fundamental makespan scheduling problem in terms of abstract argumentation frameworks (AFs). We define three types of AFs, whose stable extensions are in one-to-one correspondence with schedules that are feasible, efficient and satisfying fixed decisions, respectively. We extract the argumentative explanations from these AFs and the natural language explanations from the argumentative ones.
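
    A brute-force sketch of the core formal notion above, the stable extensions of an abstract argumentation framework (a conflict-free set that attacks every outside argument); the four-argument AF below, encoding two job-to-machine choices, is a made-up example far simpler than the paper's makespan AFs:

      # Enumerate stable extensions of a tiny AF by exhaustive search.
      # The AF is invented for illustration; real scheduling AFs are larger.
      from itertools import combinations

      def stable_extensions(args, attacks):
          exts = []
          for r in range(len(args) + 1):
              for S in map(set, combinations(args, r)):
                  conflict_free = not any((a, b) in attacks for a in S for b in S)
                  attacks_rest = all(any((a, b) in attacks for a in S)
                                     for b in args - S)
                  if conflict_free and attacks_rest:
                      exts.append(S)
          return exts

      # Arguments: put job j on machine m; attacks: the two options per job
      # exclude each other, so each stable extension is one full assignment.
      args = {"j1_m1", "j1_m2", "j2_m1", "j2_m2"}
      attacks = {("j1_m1", "j1_m2"), ("j1_m2", "j1_m1"),
                 ("j2_m1", "j2_m2"), ("j2_m2", "j2_m1")}
      print(stable_extensions(args, attacks))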

