- Company Name: CrowdWorks Inc.
- Number of people using SideCI: approximately 30
- Interviewees: CTO Akira Yumiyama, Engineer Hideki Igarashi
Toward a team where each person can easily make improvements
Today we visited CrowdWorks Inc., the company behind "CrowdWorks", one of the largest crowdsourcing sites in Japan, with more than one million users. The service has a strong track record with government offices and listed companies, and its registrants range widely, from professionals such as engineers and designers to seniors and homemakers.
CrowdWorks Inc.'s development team of about 30 people upholds "Don't stop to ask for permission, apologize later" as its motto. Since the company's listing in 2014, the team has continued development while investing in code quality as its membership grows. To build a good environment, they opened up participation in meetings and created a #develop_kikeruka channel on Slack ("kikeruka" roughly means "easy to ask" in Japanese), making it easier to ask questions casually. The result is a team where communication flows and each person can make improvements easily.
Making productivity 1,000 times better in the sustained growth phase
- Please tell us why you place so much weight on code quality
Mr. Yumiyama: Because we have a big mission, to keep providing the CrowdWorks service, we place great value on keeping our code maintainable.
Unlike disposable code, which you write once and are done with, a service becomes valuable by being provided continuously and improved constantly, so it needs high-quality, maintainable code.
Comparing code written and implemented with good foresight against the opposite kind, code that is far too complex, I think the difference in productivity when extending features can be 100 times, 1,000 times, or even more.
When we say the design of code is bad, the problem is not limited to that one section, because bad code keeps being added to compensate for the bad design. Code like this grows ever more complex and harder to understand, and the cost of improving the service skyrockets.
At some point you plainly realize "okay, this is bad...", but even making the problem visible is very costly.
Mr. Igarashi: In the past I often saw new technical debt being created.
At CrowdWorks we do code reviews between engineers, but they carry communication costs, because what people point out, and how they think, differs from person to person. Having such checks come not just from people but also from a tool is really valuable: feedback from that third-party perspective makes communication easier.
I strongly believe code should be clean, so I expect that thinking about the overall picture and keeping the design clean will contribute to the current phase of CrowdWorks, where sustained growth is what matters.
Mr. Yumiyama: Where to place the emphasis and what to prioritize, whether releasing in the shortest possible time, building in features ahead of future needs, or keeping the code simple, changes with the phase of the service. We think CrowdWorks, now listed and with a growing number of engineers, is in the "sustained growth phase".
In the "sustained growth phase", keeping the code maintainable, legible, and easy to understand is very valuable.
Measuring code quality and test coverage alone did not make the code cleaner
- What kinds of efforts did you make in the past?
Mr. Yumiyama: There were times when we used tools to measure code quality and test coverage, but changes in the measurement results did not lead to improvement.
We thought there were two reasons for this.
One was that the tools' findings appeared in a different place from the usual review cycle.
The other was that in many cases the tools' findings were not appropriate for how the code actually operated.
With the previous tools, all we got was an x mark on GitHub, where our usual communication happened, so checking the details was a hassle and many of us simply didn't bother. (sample image below)
GitHub status notification
On top of this visibility problem, there were many cases where the findings were valid but hard to act on in our current situation, so they did not become effective feedback. As with "broken windows", we had reached a state where the findings were no longer considered valuable.
Let's display code quality measurement results on pull requests with SideCI
- How did you deal with that problem?
Mr. Yumiyama: We had been conscious of the problem for a long time but never quite got around to dealing with it. A few of us engineers talked it over, finally decided to act, and began working to solve it. We have a system called "Engineer Club Activity": when three or more engineers get together, they may freely use up to 10% of their work time for activities such as improvements.
Mr. Igarashi: What I think is good about these activities is that you can keep working on improvements without asking for permission each time. The phrase "Don't stop to ask for permission, apologize later" comes up all the time at our company.
At first a few people knew about SideCI, and we heard from them that it checks the code and leaves comments on pull requests on GitHub, so we started by introducing it to the repository of an in-house daily bulletin system.
Since we do our usual code reviews on GitHub, we wanted to fold it into that cycle.
When we tried it, a ton of findings appeared (laughs). We realized we would need a lot of adjustment before introducing it to the CrowdWorks repository.
Mr. Yumiyama: Introducing SideCI to the daily bulletin repository taught us that this was not something you could simply switch on and be done with; that was very valuable information. We made up our minds to properly work out which quality rules we wanted. As for the lack of sync with the review cycle, with comments being made on pull requests, we could picture it working properly.
Discussing coding rules with comments on Pull Requests
- Did commenting on pull requests work well?
Mr. Igarashi: In moving from the previous measurement tools to SideCI's code reviews, we first ran SideCI with commenting on pull requests disabled, so we could consider which rules were appropriate. Some of the rules SideCI checked were too strict for the current state of the code, so we started by excluding those.
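With RuboCop, this kind of exclusion is typically done in `.rubocop.yml`. The rule names and thresholds below are illustrative, not the ones CrowdWorks actually excluded:

```yaml
# .rubocop.yml — hypothetical sketch of excluding rules that are too strict
# for an existing codebase; these can be re-enabled one by one later.
Metrics/AbcSize:
  Enabled: false        # too noisy on legacy code, revisit after refactoring

Style/Documentation:
  Enabled: false        # don't require top-level class comments yet

Metrics/MethodLength:
  Max: 25               # loosen the default until long methods are split up
```

Starting with a permissive configuration keeps the first round of pull request comments actionable instead of overwhelming.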
Mr. Yumiyama: Preparing for full-scale introduction took about three weeks; we checked our direction in a weekly meeting and carried the work forward as everyone's homework.
We were aware that coding styles varied too widely, so we first used RuboCop's auto-correct feature to apply the fixes in one batch and reduce the variance in coding style.
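As a rough illustration (not taken from the interview), this is the kind of purely mechanical change RuboCop's auto-correct (`rubocop -a`) makes under common default rules. The method names are hypothetical; behavior is identical before and after, only the style is unified:

```ruby
# Before: double quotes without interpolation and a redundant `return`
# (flagged by Style/StringLiterals and Style/RedundantReturn under
# RuboCop's common defaults)
def greeting_before(name)
  return "Hello, " + name
end

# After auto-correction: single quotes, no redundant `return`
def greeting_after(name)
  'Hello, ' + name
end

puts greeting_before('CrowdWorks') == greeting_after('CrowdWorks')  # prints "true"
```

Because such corrections never change behavior, applying them all in one batch commit, as described above, keeps the mechanical noise out of later, substantive reviews.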
Mr. Igarashi: After we finished narrowing down which rules to apply, we configured SideCI so that analysis results were commented on pull requests.
Once the comments began appearing on GitHub pull requests, where we usually discuss code, fixes were made based on the findings, and Mr. Yumiyama and I started receiving Slack messages like "Isn't this rule too strict?". Unlike with the previous tools, the findings were not left sitting there, and it seemed that knowledge of the rules and a shared understanding spread through the discussions.
Mr. Yumiyama: At the start there were many discussions about the context behind the findings, but as we discussed and applied them, we came to make fixes as soon as something was flagged. The current rules now seem to be widely known among our members.
Increasing value with easy-to-understand code
- Please tell us about future work and efforts
Mr. Yumiyama: There are still rules we think we should apply but haven't yet. To take an extreme example, a rule like "keep methods under five lines" can be handy at times, but we will need to sort out the occasions when it is appropriate and the occasions when it is not.
Also, since there is so much pre-existing code, it is hard to introduce the rules all at once. To introduce them gradually, we need to extend the tools so we can fine-tune the rules, and strike a balance so that development speed doesn't drop. These are points we want to work on going forward.
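One standard RuboCop mechanism for exactly this kind of gradual introduction (whether CrowdWorks used this specific feature is not stated in the interview) is `rubocop --auto-gen-config`, which records all current violations in a `.rubocop_todo.yml` so that existing code passes while new code is held to the full rules:

```yaml
# .rubocop.yml
inherit_from: .rubocop_todo.yml   # generated with: rubocop --auto-gen-config

# .rubocop_todo.yml (generated) temporarily relaxes rules only for files
# with existing offenses, e.g. (file path is a hypothetical example):
# Metrics/MethodLength:
#   Exclude:
#     - 'app/models/legacy_model.rb'
```

Entries are then deleted from the todo file one at a time as each area of the code is cleaned up, tightening the rules without ever blocking day-to-day development.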
Mr. Igarashi: Among the code whose quality is declining, we want to prioritize fixing the core parts, the ones we use most often.
Also, when pre-existing and revised code end up mixed together, members get confused about which style to follow, so communicating what modifications are being made will be one challenge.
One way quality degrades is by unifying coding styles to match the existing code so that everything looks consistent: done indiscriminately, this inherits the bad styles as well. We will need a flow in which the bad parts are properly revised.
Mr. Yumiyama: To keep providing and improving services, legible, easy-to-understand code has great value.
There are many viewpoints on what makes code "legible". For example, writing multiple methods with similar structures can be considered more "legible". That is fine, but if it makes pre-existing methods harder to read, deliberately implementing with a different structure may sometimes be more effective.
Findings at that level are difficult to achieve with static analysis, but it would be wonderful if tools evolved to the point where such findings could be automated.
At CrowdWorks we heard about the importance of code quality and learned about their efforts toward it. We came away fully convinced that having team members agree on the same rules, and follow them, is very important for improving code quality.
To keep delivering better value, organizing and maintaining pre-existing code is crucial. This interview let us feel how much CrowdWorks Inc. values continuous improvement and investment in keeping code clean for a better service.
Sign up and try Sider for free in just 30 seconds!
You can try it free for 14 days.