
    In recent years, software innovation and entrepreneurship driven by mass participation has become a new form of software development and application in the Internet era, and it is rapidly reshaping global software innovation models and the structure of the software industry. Systematically revealing the core mechanisms behind this Internetware-style development paradigm, and building a software innovation ecosystem suited to China's independent development path, is a major historic opportunity for China's software industry.


    Against this background, the National University of Defense Technology, Peking University, Beihang University, the Institute of Software of the Chinese Academy of Sciences, and other institutions jointly studied crowd-based methods and technologies for network-based software development. They revealed the mechanisms of large-scale Internet collaboration centered on crowd-based collaborative development, open resource sharing, and continuous trustworthiness evaluation; combined these mechanisms with engineering-oriented software development methods to systematically propose a crowd-based methodology for network-based software development; formed a technology system for Internetware development and operation; and built a trustworthy national environment for software resource sharing and collaborative production, called Trustie (Chinese short name: 确实).


    The Trustie team is a group of researchers who have grown up through this process and who are eager to explore, innovate, and take on challenges; it includes university faculty, engineers, graduate students, and undergraduates.


    In the course of this work, the Trustie team produced technical inventions in three areas, resulting in 26 granted invention patents, 38 software copyrights, and 7 technical standards (or standard proposals); the inventors have been invited to give more than 20 keynote talks at major academic conferences in China and abroad. Figure 1 shows the structure of these results.


Figure 1: Structure of the Trustie results


    The project developed a results-transfer model in which technical achievements are patented, patents are promoted through standards, tools and environments are delivered as services, and talent is cultivated at scale, providing key technical support and practical guidance for China's innovative software industry. Trustie has significantly improved the software production capabilities of large software companies such as Neusoft, Digital China, Careland, and Wanda Information; supported trustworthy software production in key national domains including aviation, aerospace, and defense; established public innovation platforms in 9 software parks, covering more than 2,500 software companies and accumulating more than 330,000 software resources; built a well-known international open-source community; supported the crowd-based development of more than 2,560 software projects, including national "HeGaoJi" (core electronic devices, high-end generic chips, and basic software) major projects, international cooperation projects, and education projects; and has been widely adopted for software talent training at more than 100 universities, serving more than 280,000 users in total.


    Two sub-results of the project won the 2013 Hunan Province Technology Invention First Prize and the 2012 First Prize for Scientific and Technological Progress in the Ministry of Education's Higher Education Outstanding Scientific Research Achievement Awards, respectively, and the project has passed the preliminary evaluation for a 2015 National Technology Invention Second Prize.
    

Figure 2: The Trustie 2.0 software innovation and entrepreneurship service platform


    To date, the project team has achieved a series of breakthroughs in the collaborative construction, operation management, trustworthiness evaluation, and continuous evolution of Internetware, and has proposed and built Trustie 2.0, an application model and supporting platform for Internetware-based innovation and entrepreneurship, as shown in Figure 2. The team continues to push ahead with this work in support of China's drive to become an innovative nation.


   Trustie provides its services online over the Internet. The service platforms released so far include:

 * Trustie practice teaching platform
 * Trustie collaborative development platform
 * Trustie open-source monitoring and recommendation platform
 * Trustie trustworthy resource repository platform
 * Trustie service composition development platform
 * Trustie trustworthiness evaluation and enhancement platform



Published: 2015-11-15 10:49
Last edited: 尹刚
Assigned to: Unassigned
Posted: 2018-06-14 10:59
Updated: 2018-06-14 11:43
----------------------- REVIEW 1 ---------------------
PAPER: 171
TITLE: One Size Does Not Fit All: An Empirical Study of Containerized Continuous Deployment Workflows
AUTHORS: Yang Zhang, Bogdan Vasilescu, Huaimin Wang and Vladimir Filkov


----------- Summary -----------
This paper presents an empirical study of the workflows and tools used for containerized continuous deployment. The authors identified a set of 1000 developers using Docker for CD and deployed a survey to them about the workflows they use, the tools they use, and their needs and pain points. The authors also develop a series of hypotheses and then mine data from Docker Hub and GitHub to support or refute the hypotheses. The paper provides insight into why developers use CD, the tools they use, the workflows they follow, and their desires about what should be changed or improved.

----------- Detailed evaluation -----------
I felt this was a very strong paper.  The topic is super relevant and yet under-studied in the SE research community.  The authors took a very pragmatic approach and began by actually asking the developers that were involved.  They then took a more quantitative approach using mined data to answer hypotheses.  In my view, this approach offers the best of both qual and quant worlds in research.  I found the writing easy to follow and didn't find any blatant errors in the paper.  This paper opens the door for further research, in terms of both empirical studies AND improved tools/processes.  

Some smaller things:

At the end of 3.4., the authors posit that "simplification is bound to reduce performance."  Why is this?  This isn't obvious to me.

There are small typo and grammar errors throughout.  Please fix for the camera ready.

In section 4.2., when discussing the difference in build results, the term "positive" means more build errors and "negative" means fewer.  This is backwards from the intuitive meaning of these terms.  I'd suggest either reversing them or at least being explicit about the meaning.

----------- Strengths and weaknesses -----------
Strengths:
 - the study is incredibly relevant right now.  CD has been in vogue for at least five years, and Docker has been used for CD and other purposes for at least a few years as well.  While there has been some work in either area, this is the first paper I'm aware of that looks at how containers (and Docker is THE name in containers) are used in CD workflows.  As such, the novelty and value of the paper is high.
 - I like the mixed-methods approach of surveying developers and also using mined data to answer hypotheses.
 - I liked the categories of questions that were asked of developers.  Each gave different insights, and the answers for each category (e.g., motivations, barriers, etc.) have relevance for different audiences, as made more explicit towards the end of the paper.
 - The statistics were well explained and well thought out, especially the mixed effects models used.
 - The authors were smart about the CD tools (DH, Travis, Circle) that they examined, trying to capture the primary tools used today.
 - I appreciated the descriptive statistics and the regression details.
 - Most insight boxes contained useful summary information.
 - I liked the comparison text at the end of 5.2.  This can be quite useful for practitioners.

Weaknesses:
 - some of the insight boxes have information that isn't useful.  For example: "The DHW and CIWs are different.  Using different CI tools can also result in different outcomes."  This is not informative at all.
 - Grammar and typographical errors throughout.

----------- Questions to the authors -----------
1. Will you make the survey text and responses available?
2. If the timeFlag has a negative impact on release frequency, is it possible that some projects languish or simply enter maintenance mode?  Did the authors check that all of the projects remained active?


----------------------- REVIEW 2 ---------------------
PAPER: 171
TITLE: One Size Does Not Fit All: An Empirical Study of Containerized Continuous Deployment Workflows
AUTHORS: Yang Zhang, Bogdan Vasilescu, Huaimin Wang and Vladimir Filkov


----------- Summary -----------
This paper describes two studies that seek to understand how developers are using containerization technology (specifically, Docker) to support continuous deployment (CD) workflows in software development. The first study qualitatively (via a survey instrument) examines the technologies, experiences, and perceived needs and challenges of a randomly selected set of containerization workflow users. This study resulted in a set of hypotheses about different aspects of containerized CD workflows, which were evaluated quantitatively in a second study. The paper identifies areas that require follow-on research, as well as some advice and insights for practitioners and service providers.

----------- Detailed evaluation -----------
Pragmatically speaking, the use of containerized CD workflows in modern agile development has now penetrated widely enough for us, as researchers, to conclude that (a) it isn't just a passing trend (and is, therefore, worthy of our attention), and (b) it is not always easy to be successful with containerized CD workflows. The time is right to ask how developers are leveraging containerized CD workflows in their production activities, what options they have, what work they have to do to be successful with existing containerized CD workflows, what is going well and what isn't, etc., towards the ultimate goal of helping people be more successful.

This paper steps into that breach. It reflects the first careful, sober research I have seen on these questions. It poses two relevant research questions and sets up a rigorous and well-executed mixed-methods study: one qualitative, and one quantitative.

The qualitative study (based on a survey of over 150 developers) was executed well and produced some interesting and useful insights from developers who are using containerized CD workflows. There were a few negative issues I noted:

    - First, on the positive side, I noted how carefully section 3.2 tied together the survey results with some of the prior literature, and I really appreciated that care and extra value-add. However, this needs to be done carefully. There are a few places (which I've pointed out in the detailed comments below) where it was not easy to tell whether a claim came from the authors or from the prior literature, and whether it was actually supported, as stated, by the survey results. Moreover, in a couple of places, it read like a product sales brochure, rather than as objective research. For example, "Chen [8] reported that CD allows delivering new software releases to customers more quickly. Previously, an application released once every one to six months. Now CD makes an application release once a week on average. Some applications can even release multiple times a day when necessary." This sounds like an advertisement. Contrast with something like this: "By leveraging affordances provided with CD, Chen noted that project teams can release once a week on average, or more frequently if desired. Some of our respondents confirmed this; e.g., R120 said..."

    - Second, while the "unmet needs" reported in Section 3.4 were among the most interesting results of the survey, some of these insights are somewhat superficial (e.g., N2 is a comment you could make about most large, complex, extensively configurable pieces of software), which reduces their utility. If the authors have more detailed information, I strongly suggest either including it here or ensuring that it is made available elsewhere and referenced (e.g., in the replication package).

    - Finally, the hypotheses identified in Section 3.4 all referred to attributes of builds that "tend" to increase or decrease over time. It was not clear to me how to understand this. Please provide a definition that will allow other researchers to reach the same conclusions if they do the same experiments and see the same kinds of results.

The quantitative study examined the differences between CD workflows and evaluated the hypotheses generated during the qualitative study. The study seemed to be set up and executed well, and there are some interesting results. My only real concern was whether the authors took steps to ensure that the set of projects they evaluated reflected a good sampling of project properties, and that they checked for the possibility of confounding variables (e.g., were the results affected by Java projects vs. Node projects, or by some attributes of the contributors, etc.).

Overall, I like this paper. I think it is a timely piece of work that has some useful insights on its own, and that clearly motivates the need for additional research. I guess my most significant concern is that, in its current state, the actionable insights from this work are not so clear to me. For example, the knowledge gathered is not sufficiently deep or detailed that a developer could use it confidently to make better choices about the CI/CD pipelines that might work best for them, or when their needs have changed enough that it is time to consider taking on the overhead of evolving their support base. Additional work will need to be done to produce the actionable insight. Of course, you have to start somewhere. This seems like a reasonable starting point.


Detailed Comments
[TBD]

----------- Strengths and weaknesses -----------
+ Interesting, relevant, timely topic
+ Very well-written, informative, and well-organized paper
+ Generally well-executed methods and studies, which provide some confirmation support to prior published work and identify some novel insights

- Potentially limited impact of these results

----------- Questions to the authors -----------
1. I was interested to see, in section 3.1, that the range of CI/CD experience claimed by respondents in the survey study was 1-20 years. Humble and Farley's book was published in 2011 (7 years ago), if I recall correctly; they also had a relevant paper published in the Agile Conference in 2006 (12 years ago), the same year that Martin Fowler first blogged about CI. I'm sure that some of the CI/CD practices pre-date the paper, perhaps by quite a bit, but 20 years ago, most people were still widely practicing waterfall and other types of top-down development and delivery, not agile methods, and continuous deployment was not a goal at that time. Can you say anything about what respondents meant when they claimed 20 years of experience with CI/CD? I'm not really sure how to interpret it.

2. Did the authors examine project characteristics for impact on the results (e.g., number of committers, development languages used, etc.)?

3. On the initial coding of the survey responses: section 3.1 notes that one author was involved in the coding. Did validation occur (e.g., by having a second author, or other capable coder, independently code some of the same data and check the inter-coder agreement)?


----------------------- REVIEW 3 ---------------------
PAPER: 171
TITLE: One Size Does Not Fit All: An Empirical Study of Containerized Continuous Deployment Workflows
AUTHORS: Yang Zhang, Bogdan Vasilescu, Huaimin Wang and Vladimir Filkov


----------- Summary -----------
This paper reports on an empirical study conducted to explore containerized continuous deployment (CD) workflows. The study was conducted in two phases: first, more than 150 developers were surveyed online. The survey identified two typical containerized CD workflows: based on the automated build feature of the Docker Hub container repository (DHW) and based on features of continuous integration services, such as TravisCI, Jenkins, and CircleCI (CIW). The survey results were also used to generate hypotheses about specific characteristics of DHW and CIW workflows, such as complexity, stability, release frequency, etc. These hypotheses were statistically validated using data collected from 1,125 open-source projects from DockerHub. The results show that (a) CIW has a higher release frequency, shorter build latency, and more build errors than DHW; (b) in both workflows, image build latency and image stability tend to increase over time, while the release frequency tends to drop; (c) there are observable differences between DHW and CIW but no notable differences within CIW workflows, i.e., between TravisCI and CircleCI builds.

----------- Detailed evaluation -----------
The paper is very well written, clear, and easy to follow. The applied methodology is thorough and the results are analyzed in detail. The survey questions, scripts, and data are available online for replication. 

However, the paper has a few weaknesses. First, it is somewhat low on new and actionable insights. In particular, the results in Section 3.2 (Motivation for doing CD) do not provide any new information on CD and are not specific to the containerization scenario. Why was this question needed in the context of this study? 

I also do not quite see what the reasons behind the findings are, e.g., why CIW has more build errors than DHW. It would be great if the paper could delve deeper into such topics, perhaps by conducting more focused interviews with the developers. 

That would also help derive actionable outcomes for researchers / developers. E.g., should developers prefer one workflow to another? The paper does not provide such recommendations. Section 5.3 does discuss practical implications of the findings, but they are mostly straightforward and do not seem to be directly derived from the study results, e.g., “simplify Dockerfile content” and “optimize image structures”. 

I do not immediately see how some of the hypotheses, for example, H4-H8, follow from the findings of the survey. The “Practical Differences” section (Section 5.1) also does not seem to directly follow from the results of this study. 

The "Unmet needs" discussion (Section 3.4) is based on the opinions of only 9 developers. That seems too small a sample to reach meaningful conclusions. 

A relatively minor point: In the very first sentence of the intro and the related footnote, the authors say that they use “continuous deployment” and “continuous delivery” terms interchangeably. I wonder why they do not use a more precise terminology (which the authors without a doubt are aware of, as evident from the footnote). Also, were the considered workflows, in fact, part of continuous deployment or continuous delivery?

To summarize, I would suggest the authors explore the identified statistical findings in more detail and delve into reasons behind each finding.

----------- Strengths and weaknesses -----------
+ Thorough methodology 
+ Detailed analysis of results 
+ Well-written and easy to follow

- Low on novel insights and actionable outcomes
- No deep analysis of reasons behind statistical observations 
- Some conclusions do not directly follow from the study

----------- Questions to the authors -----------
1) What is specific to the containerization scenario in Section 3.4?
2) Please explain how hypotheses H4-H8 follow from the findings of the survey.
3) Can you classify the analyzed workflows to either continuous deployment or continuous delivery?


-------------------------  METAREVIEW  ------------------------
PAPER: 171
TITLE: One Size Does Not Fit All: An Empirical Study of Containerized Continuous Deployment Workflows

The program committee thanks the authors for the additional information provided during the rebuttal process. We have agreed that this paper should be accepted.
Replies (3)
  • 张洋, 7 years ago

    Main reasons our earlier ICSE submission was rejected, for everyone's reference:

    1. The Intro did not set up a strong enough conflict and lacked supporting data, so reviewers could easily question whether the motivation was meaningful.
    2. The paper did not state its research questions explicitly, which invited reviewer criticism.
    3. The significance and contributions of the research, including its practical value, were not explained clearly.
    4. The model had minor problems, even though the reviewers did not notice them.
    5. There was a lot of redundant information.

  • 张洋, 7 years ago

    A few lessons learned:

    1. In the Intro, set up the conflict well using real data, statistics, and prior knowledge (based on 尹老师's advice).
    2. In the Intro, clearly state the concrete research questions, significance, and contributions (based on 王老师's advice).
    3. For empirical studies, it is best to release the code and data for the analyses and experiments.
    4. Empirical studies are best done with a combination of qualitative methods (surveys, etc.) and quantitative methods (regression analysis, etc.).
    5. Choose regression models carefully; different kinds of parameters need careful handling (for example, the repeated significance testing problem), and the regression results should come with good explanations. A small sketch of one common correction appears after this list.
    6. At the end of the paper, give recommendations to different audiences (researchers, developers, etc.) to demonstrate practical significance.
    7. Put conclusion-style statements in boxes.
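    To illustrate point 5: when many coefficients are tested at once, raw p-values should be adjusted for multiple comparisons. Below is a minimal sketch of the Benjamini-Hochberg false-discovery-rate adjustment; this is one common correction, not necessarily the one used in the paper, and the p-values are made up for the example.

    #include <algorithm>
    #include <cstdio>
    #include <numeric>
    #include <vector>

    // Benjamini-Hochberg adjustment: scale the k-th smallest p-value by m/k,
    // then enforce monotonicity from the largest down and cap at 1.
    std::vector<double> bh_adjust(const std::vector<double>& p) {
        const std::size_t m = p.size();
        std::vector<std::size_t> order(m);
        std::iota(order.begin(), order.end(), std::size_t{0});
        std::sort(order.begin(), order.end(),
                  [&](std::size_t a, std::size_t b) { return p[a] < p[b]; });

        std::vector<double> adj(m);
        double running_min = 1.0;
        for (std::size_t k = m; k-- > 0;) {  // walk from the largest p down
            double scaled = p[order[k]] * static_cast<double>(m) / (k + 1);
            running_min = std::min(running_min, scaled);
            adj[order[k]] = std::min(running_min, 1.0);
        }
        return adj;
    }

    int main() {
        // Hypothetical raw p-values from five regression coefficients.
        std::vector<double> p = {0.001, 0.02, 0.03, 0.04, 0.2};
        for (double q : bh_adjust(p)) std::printf("%.4f\n", q);
        return 0;
    }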

  • 张洋, 7 years ago

    Timeline: the paper was submitted on March 9; the first-round reviews came back on May 7, with scores of 2, 2, 1 from the three reviewers (two strong accepts and one weak accept); the rebuttal period ran from May 7 to May 10; and the final acceptance decision was announced on June 11.

Assigned to: 陈晓婷
Posted: 2018-06-12 08:29
Updated: 2018-06-12 15:10

As the title says, usernames should contain only letters and digits. Some existing accounts contain special characters such as ".", which makes their project repositories inaccessible.


To reproduce: enter a valid username and click Submit; while the page is loading, change the username and click Submit again. The backend will save the modified username.
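A minimal sketch of the kind of server-side check this calls for, assuming the alphanumeric-only rule described above; the helper name is hypothetical. The key point is to re-validate on every submit, so a username altered mid-request cannot slip through:

#include <regex>
#include <string>

// Hypothetical server-side validator: accept only ASCII letters and digits.
// Run this on every submit, not only on first page load, so a value changed
// while the page is loading cannot bypass the check.
bool is_valid_username(const std::string& name) {
    static const std::regex kAlnumOnly("^[A-Za-z0-9]+$");
    return !name.empty() && std::regex_match(name, kAlnumOnly);
}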

Replies (2)
  • 陈晓婷, 7 years ago

    Status changed from New to Resolved

    % Done changed from 0 to 100

  • 童莉, 7 years ago

    Description updated.

Assigned to: 陈晓婷
Posted: 2018-06-12 09:50
Updated: 2018-06-12 15:09
Replies (1)
  • 陈晓婷, 7 years ago

    Status changed from New to Resolved

    % Done changed from 0 to 100

Assigned to: 黄井泉
Posted: 2018-06-12 08:35
Updated: 2018-06-12 08:35

I created a test repository, and the displayed Git repository address is xxx/xphi/xxxx,

but when I look in Git, it is actually xxx/sunbingke/xxx.

Assigned to: 陈晓婷
Posted: 2018-03-20 10:20
Updated: 2018-03-22 14:59

Please make the teacher view and the student view consistent. Otherwise, when a student has received 2 anonymous-review records from 1 reviewer, he thinks he has been anonymously reviewed by 2 different people.

https://www.trustie.net/student_work?homework=8118

Replies (2)
  • 创新使者, 7 years ago

    Description updated.

    Status changed from New to Resolved

    % Done changed from 0 to 100

  • 胡莎莎, 7 years ago

    Description updated.

[Task] Improvements to the anonymous-review features (priority: Normal)
Assigned to: 陈晓婷
Posted: 2018-03-19 17:07
Updated: 2018-03-22 14:56

 1. Messages about anonymous-review appeals are missing from the bell icon's numeric badge, i.e., new messages of this type produce no bell notification count. Please add this.

2. Replies to homework review scores (including replies to anonymous reviews) need a new notification message:

Your review has a new reply: XXXXXX

Replies (1)
  • 陈晓婷, 7 years ago

    Status changed from New to Resolved

    % Done changed from 0 to 100

Assigned to: 陈晓婷
Posted: 2018-03-21 16:52
Updated: 2018-03-21 18:04

One's own

Someone else's

Replies (2)
  • 陈晓婷, 7 years ago

    Status changed from New to Resolved

    % Done changed from 0 to 100

  • 胡莎莎, 7 years ago

    Description updated.

    Priority changed from Normal to Immediate

Assigned to: 陈晓婷
Posted: 2018-03-21 17:24
Updated: 2018-03-21 17:50
Replies (1)
  • 陈晓婷, 7 years ago

    Status changed from New to Resolved

    % Done changed from 0 to 100

Anonymous review not yet started; late submissions open
Language: C++
Submission deadline: 2017-03-31 16:00

This problem is optional, but everyone is encouraged to attempt it!

1. Task
Implement addition of extra-long integers whose sum has at most 100 digits. Read two arbitrary decimal integers of at most 100 digits each from the keyboard (their sum also has at most 100 digits) and print the result of the addition.
2. Hint
Store the two long integers in character arrays and simulate column addition; consider combining integer division ("/") and remainder ("%") to implement the carries. A sketch following this approach appears after the assignment details below.

3. Example:

Input:

123456789987654321123456789

987654321123456789987654321

Output:

1111111111111111111111111110


Sample I/O: 1 set

#1
  • Sample input:
    1
  • Sample output:
    1
Late submission penalty: 0 points
Anonymous review opens: 2017-04-07 00:00
Missed-review penalty: 0 points per submission
Anonymous review closes: 2017-04-14 23:59
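A minimal sketch of the column-addition approach from the hint, assuming whitespace-separated input of at most 100 digits per number; variable names are illustrative, not a reference solution:

#include <cstdio>
#include <cstring>

int main() {
    char a[128], b[128];
    if (std::scanf("%127s %127s", a, b) != 2) return 1;

    int la = (int)std::strlen(a), lb = (int)std::strlen(b);
    int sum[128] = {0};               // result digits, least significant first
    int n = (la > lb) ? la : lb;

    for (int i = 0; i < n; ++i) {
        int da = (i < la) ? a[la - 1 - i] - '0' : 0;  // i-th digit from right
        int db = (i < lb) ? b[lb - 1 - i] - '0' : 0;
        sum[i] += da + db;
        sum[i + 1] += sum[i] / 10;    // integer division "/" gives the carry
        sum[i] %= 10;                 // remainder "%" keeps the digit in place
    }
    if (sum[n] != 0) ++n;             // a final carry lengthens the result

    for (int i = n - 1; i >= 0; --i) std::putchar('0' + sum[i]);
    std::putchar('\n');
    return 0;
}

With the two inputs from the example above, this prints 1111111111111111111111111110, matching the expected output.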