2025 VIS Area Curation Committee Executive Summary
Introduction
This report summarizes the findings, recommendations, and process of the VIS Area Curation Committee (ACC) regarding the areas and keywords used for paper submissions to IEEE VIS 2025. It builds on previous ACC reports, updated with the 2025 data. According to the Charter, the goal of this committee is to analyze and report how submissions made use of the areas and keywords to describe their contributions, and to identify when these descriptors no longer adequately cover the breadth of research presented at VIS.
We use submission and bidding information from VIS 2025 to analyze recent trends following the move to an area model.
The full data and source code to rebuild this project are available here.
Committee members 2025: Jean-Daniel Fekete (co-chair), Alexander Lex (co-chair), Helwig Hauser, Ana Crisan.
Last edited: 2025-10-17.
Executive Summary
The 2025 data does not surface major new trends compared to previous reports; hence, our recommendations remain the same, possibly with less overall urgency.
Overall, the area model appears to be successful.
The growth seen in the Applications area in 2023 and 2024 has partially reverted, with 144 submitted papers in 2025. Given the still high number of submissions, we recommend taking action to address the load on the APCs of the Applications and Theoretical & Empirical areas.
This year, acceptance rates returned to a level similar to before 2024 (which saw the lowest acceptance rate to date). There is a slight uptick in desk rejects (4.7%).
Variability of acceptance rates between areas is substantial and has worsened relative to 2024. In particular, the Analytics & Decisions area has the lowest acceptance rate in its history. This trend may warrant further investigation.
Keywords are (with a small exception) well distributed, and the unified PC appears to provide broad and overlapping coverage.
import itertools
import pandas as pd
import numpy as np
# Import the necessary libraries
import plotly.offline as pio
import plotly.graph_objs as go
import plotly.express as px
# [jdf] no need to specify the renderer but, for interactive use, init_notebook should be called
# pio.renderers.default = "jupyterlab"
# Set notebook mode to work in offline
# pio.init_notebook_mode()
# pio.init_notebook_mode(connected=True)
width = 750

import sqlite3

#### Data Preparation

# static data -- codes -> names etc.
staticdata = dict(
    decision = {
        'C': 'Confer vs. cond Accept',        # the 2020 and 2021 data have a different meaning
        'A': 'Accept',                        # for the 2020 data
        'A2': 'Accept',                       # after the second round, should be 120 in 2022
        'R': 'Reject',                        # reject after the first round -- should be 322 in 2022
        'R2': 'Reject in round 2',            # reject after the second round -- should be 2 in 2022
        'R-2nd': 'Reject in round 2',
        'R2-S': 'Reject in round 2',          # 2025
        'DR-S': 'Desk Reject (Scope)',        # should be 7 in 2022
        'DR-P': 'Desk Reject (Plagiarism)',   # should be 4 in 2022
        'AR-P': 'Admin Reject (Plagiarism)',  # should be 1 in 2022
        'DR-F': 'Desk Reject (Format)',       # should be 4 in 2022
        'DR': 'Desk Reject',                  # 2025
        'R-Strong': 'Reject Strong',          # cannot resubmit to TVCG for a year
        'T': 'Reject TVCG fasttrack',         # explicitly invited to resubmit to TVCG, status in major revision
    },
    FinalDecision = {  # just flatten to Accept, Desk-Reject, and Reject
        'C': 'Accept',
        'A': 'Accept',           # for the 2020 data
        'A2': 'Accept',          # after the second round, should be 120 in 2022
        'R': 'Reject',           # reject after the first round -- should be 322 in 2022
        'R2': 'Reject',          # reject after the second round -- should be 2 in 2022
        'R-2nd': 'Reject',
        'R2-S': 'Reject',
        'DR-S': 'Desk-Reject',   # should be 7 in 2022
        'DR-P': 'Desk-Reject',   # should be 4 in 2022
        'AR-P': 'Desk-Reject',   # should be 1 in 2022
        'DR-F': 'Desk-Reject',   # should be 4 in 2022
        'DR': 'Desk-Reject',     # in 2025
        'R-Strong': 'Reject',
        'T': 'Reject',
    },
    area = {
        'T&E': 'Theoretical & Empirical',
        'App': 'Applications',
        'S&R': 'Systems & Rendering',
        'R&I': 'Representations & Interaction',
        'DTr': 'Data Transformations',
        'A&D': 'Analytics & Decisions',
    },
    bid = {
        0: 'no bid',
        1: 'want',
        2: 'willing',
        3: 'reluctant',
        4: 'conflict',
    },
    stat = {
        'Prim': 'Primary',
        'Seco': 'Secondary',
    },
    keywords = pd.read_csv("../data/2021/keywords.csv", sep=';'),  # 2021 is correct as there was no new keywords file in 2022
    colnames = {
        'confsubid': 'Paper ID',
        'rid': 'Reviewer',
        'decision': 'Decision',
        'area': 'Area',
        'stat': 'Role',
        'bid': 'Bid',
    },
)

DecisionColor = {
    'Accept': 'green',
    'Reject': 'orange',
    'Desk-Reject': 'red',
}
DecisionOrder = {
    'Accept': 2,
    'Reject': 1,
    'Desk-Reject': 0,
}

dbcon = sqlite3.connect('../data/vis-area-chair.db')  # [jdf] assume data is in ..

# submissions_raw20 = pd.read_sql_query('SELECT * from submissions WHERE year = 2020', dbcon, 'sid')
# submissions_raw21 = pd.read_sql_query('SELECT * from submissions WHERE year = 2021', dbcon, 'sid')
# submissions_raw22 = pd.read_sql_query('SELECT * from submissions WHERE year = 2022', dbcon, 'sid')
# submissions_raw23 = pd.read_sql_query('SELECT * from submissions WHERE year = 2023', dbcon, 'sid')
# submissions_raw24 = pd.read_sql_query('SELECT * from submissions WHERE year = 2024', dbcon, 'sid')
# submissions_raw25 = pd.read_sql_query('SELECT * from submissions WHERE year = 2025', dbcon, 'sid')
submissions_raw = pd.read_sql_query('SELECT * from submissions', dbcon, 'sid')

submissions = (submissions_raw
    .join(pd.read_sql_query('SELECT * from areas', dbcon, 'aid'), on='aid')
    .assign(Keywords=lambda df: (pd
        .read_sql_query('SELECT * FROM submissionkeywords', dbcon, 'sid')
        .loc[df.index]
        .join(pd.read_sql_query('SELECT * FROM keywords', dbcon, 'kid'), on='kid')
        .keyword
        .groupby('sid')
        .apply(list)
    ))
    .assign(**{'# Keywords': lambda df: df.Keywords.apply(len)})
    .assign(FinalDecision=lambda df: df['decision'])
    .replace(staticdata)
    .rename(columns=staticdata['colnames'])
    .drop(columns=['legacy', 'aid'])
    .assign(DecisionColor=lambda df: df['FinalDecision'].map(DecisionColor),
            DecisionOrder=lambda df: df['FinalDecision'].map(DecisionOrder))
    # .set_index('sid')
    # .set_index('Paper ID')
    # note -- I changed the index, since 'Paper ID' was not unique for multiple years.
    # By not setting the index to 'Paper ID' the index remains with 'sid'.
    # However, 'sid' is used as a unique index in the creation of the database anyways.
)
# replace the old 'Paper ID' with a unique identifier, so that the code from 2021 will work
submissions = submissions.rename(columns={'Paper ID': 'Old Paper ID'})
submissions.reset_index(inplace=True)
submissions['Paper ID'] = submissions['sid']
submissions = submissions.set_index('Paper ID')
# submissions columns: (index), sid (unique id), Paper ID (unique), Old Paper ID,
# Decision, year, Area, Keywords (as a list), # Keywords

all_years = submissions['year'].unique()

# rates_decision computes the acceptance rates (and total number of papers) per year
# rates_decision: (index), Decision, year, count, Percentage
rates_decision = (submissions
    .value_counts(['Decision', 'year'])
    .reset_index()
    # .rename(columns={0: 'count'})
)
rates_decision['Percentage'] = rates_decision.groupby(['year'])['count'].transform(lambda x: x / x.sum() * 100)
rates_decision = rates_decision.round({'Percentage': 1})

# rates_decision_final flattens decisions to Accept / Reject / Desk-Reject
# rates_decision_final: (index), FinalDecision, year, count, Percentage
rates_decision_final = (submissions
    .value_counts(['FinalDecision', 'year'])
    .reset_index()
    # .rename(columns={0: 'count'})
)
rates_decision_final['Percentage'] = rates_decision_final.groupby(['year'])['count'].transform(lambda x: x / x.sum() * 100)
rates_decision_final = rates_decision_final.round({'Percentage': 1})
rates_decision_final["DecisionColor"] = rates_decision_final.FinalDecision.map(DecisionColor)

# bids_raw: (index), Reviewer ID, sid (unique paper identifier over multiple years),
# match score, bid of the reviewer, role of the reviewer, Paper ID
bids_raw = (pd
    .read_sql_query('SELECT * from reviewerbids', dbcon)
    .merge(submissions_raw['confsubid'], on='sid')
    .replace(staticdata)
    .rename(columns=staticdata['colnames'])
)
# renaming Paper ID to Old Paper ID, setting Paper ID to sid, keeping all three for now
bids_raw = bids_raw.rename(columns={'Paper ID': 'Old Paper ID'})
bids_raw['Paper ID'] = bids_raw['sid']

# bids = Reviewer, sid, Bid (how the reviewer bid on this paper)
# doesn't include reviewer/sid pairs that were not bid for [.query('Bid != "no bid"')]
bids = (bids_raw
    .query('Bid != "no bid"')
    # Paper ID is not unique over multiple years!
    # .drop(columns=['sid'])
    # [['Reviewer', 'Paper ID', 'Bid']]
    [['Reviewer', 'sid', 'Paper ID', 'Bid']]
    .reset_index(drop=True)
)

# matchscores becomes a reviewer x paper table of the match scores;
# many of these will be NaN since we now have multiple years together.
# We need to check whether the reviewer IDs remain unique across the years!
matchscores = (bids_raw
    # Paper ID is not unique over multiple years!
    # [['Reviewer', 'Paper ID', 'match']]
    [['Reviewer', 'sid', 'Paper ID', 'match']]
    .set_index(['Reviewer', 'Paper ID'])  # a duplicated set_index call was removed here
    .match
    .unstack(level=1)
)

# assignments = Reviewer, sid, Role (primary, secondary)
# doesn't include reviewer/sid pairs that were not assigned [.query('Role != ""')]
assignments = (bids_raw
    .query('Role != ""')
    # Paper ID is not unique over multiple years!
    # [['Reviewer', 'Paper ID', 'Role']]
    [['Reviewer', 'sid', 'Paper ID', 'Role']]
    .reset_index(drop=True)
)

del dbcon

#### Plot Defaults

acc_template = go.layout.Template()
acc_template.layout = dict(
    font=dict(family='Fira Sans', color='black', size=13),
    title_font_size=14,
    plot_bgcolor='rgba(255,255,255,0)',
    paper_bgcolor='rgba(255,255,255,0)',
    margin=dict(pad=10),
    xaxis=dict(
        title=dict(font=dict(family='Fira Sans Medium', size=13), standoff=10),
        gridcolor='lightgray',
        gridwidth=1,
        automargin=True,
        fixedrange=True,
    ),
    yaxis=dict(
        title=dict(font=dict(family='Fira Sans Medium', size=13), standoff=10),
        gridcolor='lightgray',
        gridwidth=1,
        automargin=True,
        fixedrange=True,
    ),
    legend=dict(title_font_family="Fira Sans Medium"),
    colorway=px.colors.qualitative.T10,
    hovermode='closest',
    hoverlabel=dict(
        bgcolor="white",
        bordercolor='lightgray',
        font_color='black',
        font_family='Fira Sans',
    ),
)
acc_template.data.bar = [dict(
    textposition='inside',
    insidetextanchor='middle',
    textfont_size=12,
)]
px.defaults.template = acc_template
px.defaults.category_orders = {
    'Decision': list(staticdata['decision'].values()),
    'FinalDecision': list(staticdata['FinalDecision'].values()),
    'Area': list(staticdata['area'].values()),
    'Short Name': staticdata['keywords']['Short Name'].tolist(),
}
config = dict(displayModeBar=False, scrollZoom=False, responsive=False)

def aspect(ratio):
    return {'width': width, 'height': int(ratio * width)}

# useful data sub-products

# k_all columns: (index), Paper ID, Old Paper ID, Decision, year, Area, Keywords (as a list),
# # Keywords, Keyword, Category, Subcategory, Short Name, Description
# (Old) Paper ID is not unique; however, the 'sid' is (which is the current index)
k_all = (submissions
    .join(submissions['Keywords'].explode().rename('Keyword'))
    .reset_index(level=0)
    .merge(staticdata['keywords'], on='Keyword')
)

# k_total columns: Category, Subcategory, Short Name, Keyword, Description, # Submissions, year
# counts the total number of submissions per keyword and year
k_total = staticdata['keywords'].merge(
    k_all.value_counts(['Short Name', 'year'])
        .rename('# Submissions')
        .reset_index(),
    # on='Short Name',
    how='right',
    # how='outer',
)

# k_cnt: how often was a particular keyword used among all submissions within a year?
# k_cnt columns: (index), Short Name, year, c, Category, Subcategory, Keyword, Description
# not clear how k_cnt and k_total differ!
k_cnt = (k_all
    .value_counts(['Short Name', 'year'], sort=False)
    .rename('c')
    .to_frame()
    .reset_index()
    .merge(staticdata['keywords'], on='Short Name')
)
Submissions
The number of submissions peaked in 2020 at 585 papers, which is likely due to the pandemic and the one-month extension to the deadline. The years 2021 and 2022 saw lower numbers of submissions, with 442 and 460, respectively. Submissions increased in 2023 (539) and 2024 (544), and dipped slightly in 2025 (537). Overall, this suggests a stable and healthy research field.
# `totals` (submissions per year) is not defined in the preamble above;
# reconstructed here from the `submissions` frame.
totals = (submissions
    .value_counts(['year'])
    .rename('count')
    .reset_index()
)

fig = px.bar(totals,
    y='year',
    x='count',
    orientation='h',
    labels={'count': 'Number of Submissions', 'year': 'Year'},
    text='count',
).update_layout(
    yaxis=dict(autorange="reversed", tickmode='linear'),
    title='Submission Numbers since 2020',
    xaxis_title='Number of Submissions',
    **aspect(0.35)
)
fig.show(config=config)
Acceptance Rates
Acceptance rates fluctuated only slightly from 2020 to 2023 (26.8%, 24.9%, 26.1%, and 25.8%), with 2021 marking a small dip. For 2024, we saw a rather sharp drop to 22.4%, which was partially caused by a lower first-round acceptance rate (23.2%) and amplified by three (unusual) second-round rejects. In 2025, the rate returned to a level slightly below 2020–2023 but above the unusually low rate of 2024.
Code
fig = px.bar(rates_decision_final
        .sort_values(by='FinalDecision', key=lambda decision: decision.map(DecisionOrder)),
    x='Percentage',
    y='year',
    barmode='stack',
    orientation='h',
    color_discrete_map=DecisionColor,
    color='FinalDecision',
    text='Percentage',
    custom_data=['FinalDecision', 'count'],
).update_layout(
    yaxis=dict(autorange="reversed", tickmode='linear'),
    title='Acceptance Rates since 2020',
    xaxis_title='Percentage of Submissions',
    **aspect(0.35)
).update_traces(
    hovertemplate='%{customdata[1]} submissions in %{y} have decision %{customdata[0]}<extra></extra>',
).show(config=config)
Submissions across the areas have been relatively stable between 2021 and 2025, with some notable exceptions. Applications has been a large area since the start of the area model (100 submissions in 2021), but grew to 154 papers in 2024 before dipping slightly to 144 submissions in 2025. Applications and Theoretical & Empirical are more than twice as large as the smaller areas (Representations & Interaction, Data Transformations, and Systems & Rendering), indicating an uneven load for the area paper chairs. Representations & Interaction has returned to the levels seen from 2021–2023 after a spike in 2024.
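For reference, the per-area counts behind these observations can be tabulated directly from the `submissions` frame built in the data-preparation block; a minimal sketch:

# Minimal sketch: submissions per area and year since the area model (2021+),
# using the `submissions` frame constructed above.
area_counts = (submissions
    .query('year >= 2021')
    .value_counts(['Area', 'year'])
    .rename('# Submissions')
    .reset_index()
    .pivot(index='Area', columns='year', values='# Submissions')
)
print(area_counts)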
Acceptance Rates in Areas
Code
recent_submissions = submissions[submissions['year'] != 2020]
tmptotal = (recent_submissions
    .value_counts(['Area', 'year'])
    .reset_index()
    .rename(columns={'count': 'total'})
)
tmp = (recent_submissions
    .value_counts(['Area', 'FinalDecision', 'year'])
    .reset_index()
    # .rename(columns={0: 'count'})
)
tmpfinal = pd.merge(left=tmp, right=tmptotal, on=['Area', 'year'])
tmpfinal['percentage'] = round(tmpfinal['count'] / tmpfinal['total'] * 1000) / 10.0
tmpfinal.sort_values(by='FinalDecision', inplace=True, key=lambda decision: decision.map(DecisionOrder))

fig = px.bar(tmpfinal,
    x='year',
    y='percentage',
    barmode='stack',
    orientation='v',
    color_discrete_map=DecisionColor,
    color='FinalDecision',
    text='percentage',
    custom_data=['FinalDecision'],
    facet_col='Area',
    category_orders={"year": [2021, 2022, 2023, 2024, 2025]},  # 2025 added; it was missing
    facet_col_spacing=0.06,  # default is 0.03
).update_layout(
    title='Submissions by area and year',
    xaxis_title='year',
    legend=dict(
        yanchor="top", y=1,      # adjust legend y-position
        xanchor="left", x=1.08,  # ... and x-position to avoid overlapping
    ),
    **aspect(0.8)
).update_xaxes(type='category').update_traces(
    hovertemplate='%{y}% of submissions in %{x} have decision %{customdata[0]}<extra></extra>',
)
fig.for_each_annotation(lambda a: a.update(text=a.text.split("=")[-1]))
for i, a in enumerate(fig.layout.annotations):
    if (i % 2):
        a.update(yshift=-15)
# Add horizontal line at 75% for each subplot
fig.add_shape(type="line",
    x0=0, x1=1,    # from the left to the right of the plot
    y0=75, y1=75,  # at y = 75% on the y-axis
    xref='paper',  # relative to the entire plot width
    yref='y',      # relative to the y-axis
    line=dict(color="Darkgray", width=2),
)
# Add a label next to the line at 75%
fig.add_annotation(
    x=1,                # position near the end of the plot (right side)
    y=75,               # position at 75% on the y-axis
    xref='paper',       # relative to the entire plot width
    yref='y',           # relative to the y-axis
    text="75% Threshold",
    showarrow=False,    # no arrow, just text
    font=dict(size=12, color="Black"),
    xanchor='left',     # anchor the text to the left side of the x-position
    yanchor='middle',   # center the text vertically on the y-position
)
fig.show(config=config)
Acceptance rates were fairly consistent across areas in 2021, but have diverged in every year since (2022–2025).
Generally, Theoretical & Empirical seems to have higher acceptance rates than other areas.
Analytics & Decisions appears to become substantially more selective every year, accepting only 14.9% of all submissions in 2025. Feedback from the area paper chairs for Analytics & Decisions points to the complexity of designing and evaluating integrated systems, the associated lack of an obvious singular technical contribution, and an increased reviewer focus on evidence of utility. There is also concern that the area has entered a self-reinforcing downward spiral.
Systems & Rendering fluctuates over time but has returned to a healthier 21.7% after accepting only 16.7% of submissions in 2024. It is notable that Systems & Rendering is one of the smallest areas; hence, these fluctuations may be caused by a relatively small number of papers.
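The per-area rates quoted above can be read directly from the `tmpfinal` frame computed in the code block above; a small sketch for 2025:

# Sketch: 2025 acceptance rate per area, from `tmpfinal` computed above.
accept_2025 = (tmpfinal
    .query('year == 2025 and FinalDecision == "Accept"')
    [['Area', 'percentage']]
    .sort_values('percentage')
)
print(accept_2025)  # Analytics & Decisions should show 14.9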
Keywords
The frequency of keyword use ranges from 5 to 120 submissions. The keywords with the highest number of occurrences are not very useful for categorizing papers on their own, but they are meaningful, and differentiation works effectively in combination with the accompanying keywords. We believe that having five papers use a keyword is sufficient to warrant retaining it.
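To see which keywords sit at that five-submission floor, one can filter the `k_total` frame from the data-preparation block; a minimal sketch (restricting to 2025 is an assumption, any year of interest works):

# Sketch: keywords used by five or fewer submissions in 2025, from `k_total`.
mask = (k_total['year'] == 2025) & (k_total['# Submissions'] <= 5)
rare_keywords = (k_total
    .loc[mask, ['Short Name', '# Submissions']]
    .sort_values('# Submissions')
)
print(rare_keywords)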
Code
# do a manual histogram to include non-specified keywords
# (this line was commented out but is needed for the pivot and plot below)
k_total['Submission %'] = k_total.groupby(['year'])['# Submissions'].transform(lambda x: x / x.sum() * 100)
# k_total['Year'] = k_total['year'].astype(str)  # to get categorical colors
k_total['Year'] = k_total['year'].astype(int)
k_year = k_total.pivot(index="year", values="Submission %", columns="Short Name").T

px.scatter(k_total,
    y='Short Name',
    x='Submission %',
    color='Year',
    color_continuous_scale="blues",  # alternatives: "greys", "sunset", "speed"
    # facet_row='year',
    # category_orders={'year': reversed([2020, 2021, 2022, 2023, 2024])},
).update_traces(
    # the original template had x and y swapped in the hover text
    hovertemplate="'%{y}' specified in %{x:.1f}% of submissions<extra></extra>",
).update_layout(
    yaxis_tickfont_size=8,
    yaxis_dtick=1,
    yaxis_tickmode='linear',
    hovermode='closest',
    title='Frequency of keywords across submissions',
    **aspect(1)
).show(config=config)
Trends are visible when there is a clear light-to-dark color pattern in one direction or the other. Only a few keywords exhibit such a pattern:
Tabular, Applications, and MultiView trend down.
Methodology and Perception trend up.
The others are less clear; a rough quantification of these trends is sketched below.
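These visual impressions can be roughly quantified with a per-keyword linear fit of the submission share against the year; a minimal sketch using the `k_total` frame from above (the fit itself is not part of the committee's analysis):

import numpy as np

# Sketch: slope of 'Submission %' over the years for each keyword; a positive
# slope suggests an upward trend, a negative one a downward trend.
def keyword_slope(group):
    if group['year'].nunique() < 2:
        return np.nan
    return np.polyfit(group['year'], group['Submission %'], 1)[0]

slopes = (k_total
    .groupby('Short Name')
    .apply(keyword_slope)
    .sort_values()
)
print(slopes.head())  # strongest downward trends (e.g., Tabular, Applications, MultiView)
print(slopes.tail())  # strongest upward trends (e.g., Methodology, Perception)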
Seniority and Reviewing
We also conducted a review of the seniority of the IPC members and an analysis of the average review score by seniority for the years 2024 and 2025 combined.
The following charts show the seniority of the program committee: a bar chart of PhD graduation years and a histogram of “academic age”.
Number of PC members by PhD year.
We observe that the IPC is rather “young”, peaking at a graduation year of 2018, with a large number of committee members having graduated in the last four years, including five 2024 graduates.
Aggregated seniority of PC members.
The aggregated histogram shows that the largest bin of the IPC pool graduated 6–10 years ago, and that the 0–5 year bin is about as well represented as the 11–15 and 16–20 year bins.
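The binning behind this histogram is straightforward; a minimal sketch, assuming a hypothetical `pc` DataFrame with one `phd_year` per PC member (the committee roster is not part of the published database):

import pandas as pd

# Sketch: "academic age" bins for PC members; the `pc` frame, its `phd_year`
# column, and 2025 as the reference year are all assumptions for illustration.
def academic_age_bins(pc, ref_year=2025):
    age = ref_year - pc['phd_year']
    bins = [0, 5, 10, 15, 20, 25, 100]
    labels = ['0-5', '6-10', '11-15', '16-20', '21-25', '25+']
    return (pd.cut(age, bins=bins, labels=labels, include_lowest=True)
              .value_counts()
              .sort_index())

# Example: academic_age_bins(pd.DataFrame({'phd_year': [2018, 2024, 2010, 2003]}))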
The following chart shows the relationship of review scores and seniority (including all reviewers, not only IPC members):
Relationship between seniority and review scores.
We observe that more junior reviewers tend to give slightly lower scores. The dotted regression line shows the “per review” trend, which is based on the data as-is, while the solid line shows the regression after “normalizing” to mimic a (hypothetical) IPC with all seniority years evenly represented. The normalized regression line sits at 2.625 for 0 years since PhD, compared to 2.75 at 20 years since PhD. Given the smaller pool of senior reviewers, the data becomes increasingly noisy beyond 20 years.
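A sketch of the normalization described above: each review is weighted inversely to the number of reviews contributed by its seniority year, mimicking an evenly distributed reviewer pool. The `reviews` frame with `years_since_phd` and `score` columns is a hypothetical stand-in, as the review-score data is not part of the published database:

import numpy as np
import pandas as pd

# Sketch: the two regression lines discussed above, under the assumption of a
# hypothetical `reviews` DataFrame with `years_since_phd` and `score` columns.
def seniority_trends(reviews):
    x = reviews['years_since_phd'].to_numpy(dtype=float)
    y = reviews['score'].to_numpy(dtype=float)
    # "per review" trend: ordinary least squares on the data as-is
    per_review = np.polyfit(x, y, 1)
    # "normalized" trend: weight each review by 1/count of its seniority year;
    # np.polyfit weights multiply the residuals, so pass 1/sqrt(count)
    counts = (reviews['years_since_phd']
              .map(reviews['years_since_phd'].value_counts())
              .to_numpy(dtype=float))
    normalized = np.polyfit(x, y, 1, w=1.0 / np.sqrt(counts))
    return per_review, normalized  # each is (slope, intercept)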