
Priorities

Date: March 7, 2018

Transcript By: Bryan Bishop

https://twitter.com/kanzure/status/972863994489901056

Priorities

We’re going to wait until BlueMatt is here. Nobody knows what his priorities are. He says he might be in around noon.

There's an ex-Google product director interested in helping with Bitcoin Core. He was asking about how to get involved; I told him to get involved by just diving in. He will be spending some time at Chaincode at the end of March, so we'll get a sense of what his skills are. I think this could be useful for managing things. It would be useful if he would write meeting notes, or find someone to write up meeting notes and make sure that happens, just organizational stuff. Also keeping track of priorities, seeing where people are working on related things, and saying "you guys should be working together".

A technical writer is very different from a manager type. This guy would probably be more the type to find a technical writer than to do the writing himself.

There are 600 open issues and 200 open pull requests. Someone to keep track of all this would be helpful, and keeping track of what's ready for merge would be very helpful. We're not doing so well at following up on pull requests. We could have someone go to the person who opened an issue and ask them: do you still think this needs to be open? If it was about rc3 from a few versions ago, with some weird version of Qt, those kinds of things, then after some amount of time just close those, and leave the 100-200 current issues open. This requires some technical knowledge, though, because they need to be aware of the technical problems and which pull requests are addressing which issues.

How much stuff is going to need rebase after merging something? We need some help with ordering pull requests: some sort of tool that shows the conflicts and what will need rebase. Here are 10 PRs ready for merge; if this one gets merged first, which others are going to be impacted? What is ready? Better tools around which pull requests are ready would help, and having some tags to indicate that would be a useful idea. Github has a good API and you could do REST queries to pull out which PRs have which tags. These tags could help newcomers see that some PRs are just sitting there and need only a small amount of work. Sometimes you have a new contributor; in one example, a new contributor was messing with a big file, and it's a lot to ask a new contributor to deal with, so it sat there for quite a while. New contributors don't even know to look at the documentation. Often you have to ask people to squash their commits.
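
As a sketch of the kind of tooling meant here, this pulls the open PRs carrying a given label through GitHub's standard issue-search endpoint (the label name is just an example):

```python
import requests

# Minimal sketch: list open bitcoin/bitcoin PRs carrying a given label.
# GitHub's search index treats PRs as issues, so /search/issues works here.
def prs_with_label(label, repo="bitcoin/bitcoin"):
    resp = requests.get(
        "https://api.github.com/search/issues",
        params={"q": f'repo:{repo} is:pr is:open label:"{label}"',
                "per_page": 100},
    )
    resp.raise_for_status()
    return [(item["number"], item["title"]) for item in resp.json()["items"]]

for number, title in prs_with_label("Needs rebase"):  # example label
    print(f"#{number}: {title}")
```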

What about having people with responsibility, where within the first day or two of a PR someone has to say concept ACK, or that this needs a concept ACK, or something. It doesn't have to be as strong as an ACK, either; just a signal that something isn't completely hated. Say someone wants to add some RPC or change an existing one: some categories of this are totally benign and people don't care, and other times there are design considerations that people balk at. But sometimes it needs motivation to get priority, because there's other stuff. Or you could just link people to an FAQ that explains why you don't want to do these things. You also don't want to make the barrier to entry too high.

We used to have priority labels, but the problem was that nobody would manage them. So you had "high priority" issues from 5-6 years ago that still had priority, and it wasn't useful. We ask every week on IRC about which ones are high priority, and even those don't get the attention that they need. I'm not sure how much it would help to go back to the labels; it adds a lot of layers of work to the project. Also, people might not feel responsible, as in "oh, I owe a review or a comment on whether this is a good idea or not". I'm very guilty of this myself, too. Tagging issues with priority might not help. … But maybe nagging would work. ((laughter))

Someone should write a bot that posts on every pull request asking "is this still a thing?" and then at some point closes it. "Are you feeling stressed? How about now?" Or "rate your satisfaction with this pull request".
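
A nag bot along those lines is a small script against the same API; a sketch, where the 90-day threshold and comment text are made up and posting requires a token with write access:

```python
import os
from datetime import datetime, timedelta, timezone

import requests

API = "https://api.github.com/repos/bitcoin/bitcoin"
HEADERS = {"Authorization": f"token {os.environ['GITHUB_TOKEN']}"}
STALE_AFTER = timedelta(days=90)  # arbitrary threshold for illustration

# Fetch open PRs, oldest-updated first, and nag the stale ones.
prs = requests.get(f"{API}/pulls",
                   params={"state": "open", "sort": "updated",
                           "direction": "asc", "per_page": 100},
                   headers=HEADERS).json()
now = datetime.now(timezone.utc)
for pr in prs:
    updated = datetime.fromisoformat(pr["updated_at"].replace("Z", "+00:00"))
    if now - updated > STALE_AFTER:
        # PR comments go through the issues endpoint.
        requests.post(f"{API}/issues/{pr['number']}/comments",
                      json={"body": "Is this still a thing?"},
                      headers=HEADERS)
```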

For certain parts of the code, the bot should know who to notify or mention. We could use a maintainers file with file paths and who the maintainers are for those files. It's not a maintainers file, it's a NAGFILE. Github will automatically suggest reviewers based on such a file.
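
GitHub's built-in version of this is the CODEOWNERS file, which suggests reviewers from path patterns. A NAGFILE could have the same shape; here's a sketch with hypothetical paths and handles:

```python
from fnmatch import fnmatch

# Hypothetical NAGFILE contents, patterned after GitHub's CODEOWNERS format:
# a path glob followed by the handles to nag when that path changes.
NAGFILE = """
src/wallet/*      @wallet-person
src/net*.cpp      @p2p-person @other-p2p-person
src/validation.*  @validation-person
"""

def who_to_nag(changed_path):
    """Return the handles to notify for a changed file path."""
    handles = []
    for line in NAGFILE.strip().splitlines():
        pattern, *owners = line.split()
        if fnmatch(changed_path, pattern):
            handles.extend(owners)
    return handles

print(who_to_nag("src/net_processing.cpp"))  # ['@p2p-person', '@other-p2p-person']
```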

Say you had a group of people paying attention to a thing... if people from that group ACK it, is it going to get merged, or does nobody else review it for 6 months? People need to tell wumpus it's ready for merge, and how clear is it that it's ready? Maybe it's dangerous consensus code. Which things are going to need a higher level of review? What sign-off is required?

Mentions get starred emails, with a filter on the email inbox. Some people seem to be using the github UI instead. Maybe mentions need a reply, like a mention ACK.

You can't just use git blame for nagfile construction. Who do you nag? Who nags? Github has a reviewer suggestion feature. Phabricator has a watch list that notifies users when a certain file gets changed in a pull request.

You could use a size label to indicate how big the PR is.

It's not a "lack of process" for "who should review this code". There seem to be implicit rules, where the people who are merging seem to have some criteria even if they're not formally written down anywhere. There's some threshold above which I say "okay, I think I can merge this", and it's not clear to newcomers what this threshold is. We can't have very strict rules, like "for this piece of code, you need 4 reviews from people whose voting rights sum up to…" or whatever. Or you could show someone a timeline and say: within 3 weeks, someone will look at this and give an updated estimate. But there's no way to guarantee that anyone will do that in that timeline.

We could put design notes on the developer wiki, where we put release notes. There’s no risk of things getting lost there.

Rather than telling contributors "this is the process", we could say: here's an example of what has happened in the past for other pull requests, and here's roughly what this looks like. We could give historical examples of how PRs have gone. You could also list an "ambassador" for contributors to talk with about the general process or whatever. But it's not someone said to have the ability to finalize things, just someone to socially help the user out.

There should be an FAQ for the cultural norms and what's going on, with historical examples. Also, not many people seem to be using the "-0" response. Sometimes PRs linger because nobody wants to get into a fight with the author.

It's basically a full-time job for someone to have an overview of all the open issues and PRs. It would be helpful to have a person who is generally aware of what those things are, who those people are, and what they are working on; someone aware of which issues are open, with the knowledge that a given PR closes some other tickets. The issues would then be more useful. Right now I assume that half of the issues are already fixed. Or knowledge of what has come up in the past, or what has been closed, or people who have opened similar issues, or something. Is it better to have a single person who has that overview, or a few people who share that responsibility? It might be unrealistic to have a single person doing that, but I think we should try it. Also, they should ask developers whether it's impactful.

If you use auto-close, then say "if you think this was closed in error, then please notify someone", and they could re-open it as people continue to work on it.

Let's go through the pull requests and talk about them in person, maybe just with a focus on concepts. Everyone should look at their open PRs, come up with a plan for each, and ask people to review them while you're here. As a reviewer, there's no way I'm going to get through all of this, and it's actively discouraging.

The "rebase needed" tag needs a "rebase needed, then merge" variant, so that doing the rebase guarantees a merge. There's a graveyard of dead cool ideas; that already has a tag. Are people looking at that page, though?

If something needs review and doesn't get any, then randomly assign a contributor who becomes responsible for reviewing it or for assigning someone else. It might become hot potato: you get a potato, and then you can only toss it once. If someone is overloaded, they should say they are overloaded.

We need people to commit to doing 3-4 reviews per week. The problem is a lack of qualified, focused review. We should not waste review work: if a PR gets a concept ACK and one or two reviews, then it should be considered a priority before those reviews go stale. There should be a commitment to focused review on things that are actually tagged "high priority".

What about an "almost-ready" tag? If there is someone whose job it is to maintain those tags, then fine. An "almost-ready" tag for your PR would mean you have 2 ACKs and another one seems likely. Concept ACKs could be replaced by an issue; concept ACKs should only be on issues… no, that would create too many issues. Code sitting and rotting while collecting concept ACKs is a problem; it might be a problem to have that.

You should update the opening comment on your PR. Nobody should have to go through the full 150 comments. You need to update your OP.

"High priority for review" needs to be managed and updated. We need people reviewing those things every week. People have tried to set up processes for this; the process has existed, there have been previous ones. We need commitment to actually do the review. The solution is commitment, not more process. People need to follow the process.

The "high priority" list should not be auto-accept; it should be actual high-priority stuff. You need someone to follow up and get people to do their reviews. The idea of this during the meeting was: what is the priority of the project? High priority was introduced for blockers, because people split their work into multiple pull requests and some of them need to get in first. There should be some agreement that they are high priority. The name could be changed to "blocking issues" or something.

If you want your PR on the high priority list, then you should be reviewing other people's PRs. Maybe eye-for-an-eye: you review my PR, I review your PR.

Should the meeting time alternate, so that it gets full world coverage?

Actually on the list at the moment are: networking stuff (open for 2 years or something, but it's getting done); external wallets; introducing banman; RPC; coin selection stuff; a couple of Matt's PRs which rewrite the interface between validation and net processing with regard to DoS; and some wallet work which looks ready to go today as well. There's GUI stuff, wallet stuff, networking stuff. Can we get all of these merged by the end of tomorrow? We could finally get banman in. Let's focus on these for the next 2 days, and if it doesn't work then maybe we will learn why it didn't work.

Are things removed from the high priority review list? The intention was that if something has been reviewed and actually needs changes before it gets merged, it would be removed at the next week's meeting, but to some extent this hasn't happened. Often it needs rebase, then it gets a rebase, it's not removed, but then it needs rebase again… I think we need to fix this. You're just going to learn not to use that list if you go there and see something that keeps needing a rebase; it encourages you to stop using it. If you don't rebase and don't do anything with the feedback, then it makes sense to drop it off the list. But if you were just busy for 3 days, then it should only go off the list for 3 days. If we're going to do that, then you might as well drop it off the list as a courtesy.

Review bounties funded by the community? The review fairy would pick the three best reviewers for the week, and they get the money.

Development priorities for the next 6-12 months

What has Matt been working on?

One of the conclusions yesterday was to get things off of the high-priority list faster. Also, maybe a nagfile for which areas of code correspond to which people to nag. Facebook has a bot that looks at git blame and then mentions people.

The goal for now is to talk about bigger scale development priorities. Matt has some stuff he’s working on. Cory too. Ryanofsky too. Maybe you could explain what you’re trying to accomplish, what help you need, or whatever, and if there’s any prioritization to do among those things.

Cory stuff: I have a few things going in parallel. There's the net refactor; I think it's almost done, that's coming. I slowed down a whole lot this last cycle because it was a feature release. I am going to be asking for more reviews, and in parallel I will be reviewing more stuff too. I have a toolchain project that I'm working on: gitian-style stuff to make part of gitian go away. I want to get it to a state where I can put it out in public before I push it out there, so I'm recruiting some help. I have a UTXO hash thing that I have been socializing; it's a bit too early to really discuss it and throw it out here. That's going to be my immediate project. It's an interesting feature, and I'll be describing it on the mailing list soon. I will probably be trying to recruit help on the toolchain stuff. I need to do a rebase, and I need some reviews on that. There are some other PRs that are more long-term, and I'm taking chunks out of those and PRing them slowly.

ryanofsky: I have a lot of PRs open for review. Most of them are fairly small. I am working on two projects. One is the account label renaming refactoring, to get rid of accounts and replace them with labels. There's one PR, 11536; it has a few ACKs, it's very mechanical, but kind of big. It's a straight rename. If we can get that in, there's another change of Wladimir's that I have rebased on top, which should complete the accounts-to-labels rename and get rid of accounts. Wladimir's is old now: he had a big PR, split a chunk off, and I rebased on top of that. That would be nice to get in, especially since we just had a release. I have also been working on process isolation and separation, where the GUI and node run as separate processes. There are 3 PRs there. Two of them are similar to the account label rename: kind of large, mechanical refactoring changes. They basically look for all the places where the wallet is accessing the global variables of the node and replace those with calls, and the same for the GUI accessing global variables of the wallet or the node. These PRs are open and ready for review. I am trying to get the interprocess communication into better shape; I have been trying to clean up the code. This is process splitting, and there might be disagreements on how to do the process splitting; Wladimir's idea is the later PR. But for the refactoring that separates the code into parts, people need to look at the code and the changes there. I think we agree at a high level about the wallet code: you shouldn't have a wrapper on a wrapper or whatever. Having a clean separation between the wallet and the node is generally a good thing, even if it doesn't result in separating them into processes. The GUI can be made more asynchronous. The question is: should we make the GUI more async first, or is it okay to do this without doing that? I have not made any changes to the way the GUI functions; I just made it so that you have the option to run it in a separate process making synchronous calls. Until you split it out into a separate process, it's not going to be any less responsive, since it's just going through an interface. Wladimir thinks it should be made asynchronous before splitting it out. We need to clean up the internal interfaces. The first two PRs just define the interface. The GUI code calls into the connection manager right now. The actual changes out for review are simple mechanical things where you can see the calls the GUI is making into the node and into the wallet; it makes it obvious what is being called and where cleanups need to happen. Right now the code is just a big ball of mud and you need to know what is being accessed. The concern is that if the GUI is synchronous and we use IPC, then it will be even slower. The build default in the PR is still to use a single process; you can flip that flag, but it's not the default.
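
As a rough picture of the shape of these mechanical refactors (the names below are made up for illustration, not the actual interface from those PRs): the GUI stops reaching into node globals and instead makes explicit calls on an interface object, which can later be backed by IPC.

```python
class Node:
    """Hypothetical interface object: the GUI holds a Node and makes calls
    on it instead of touching globals like the connection manager."""

    def __init__(self, connman, chainstate):
        self._connman = connman        # previously reached via a global
        self._chainstate = chainstate  # likewise

    def connection_count(self):
        return self._connman.count_connections()

    def best_block_height(self):
        return self._chainstate.height()

# Before: GUI code reads the connection-manager global directly.
# After: GUI code calls node.connection_count(). Once every access goes
# through the interface, the implementation can move to another process
# and these synchronous calls can become IPC calls.
```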

BlueMatt: What have I been working on… my PRs have a bunch of stuff which is trying to continue the work of splitting main. There's a bunch of stuff about splitting validation further. We have a dummy class at the top of validation which is making things slower, so I've been using that, trading off the loss of performance for something else. The idea is to clean up validation into two primary parts: first the actual engine, which is the thing exposed in validation that holds the locks and all that garbage, and second the storage backend, which does flush-to-disk, read-block-from-disk and all that stuff, hopefully splitting out all the locking between those two things. I want to stop holding cs_main while reading blocks from disk, which would be nice to do at some point. That's the high-level goal for validation. Also, splitting out the net processing… I've got some pushback from Suhas on that. There's one open right now which would be interesting to get feedback on. Right now we have this "find next block to download" logic, one of our most critical functions for maintaining consensus with peers, and it uses a lot of net-processing-specific data structures to do its job. Moving some of that into validation would, I think, be a useful goal; essentially that interface would turn into "hi validation, I have a peer with a tip I believe to be X, should I download something, and if so, what?", and it could be non-specific to our p2p protocol, but it requires tracking a candidate header set. I have code that does this, but it's kind of in the heart of validation and kind of complex to maintain correctly. I would appreciate feedback on that. As validation gets cleaned up into cleaner interfaces, you could do other stuff; one thing I want to do is make cs_main a … lock. For that, I think there's a PracticalSwift PR called "lookup block index" which needs review. I'm really close to making all the block indexes const, and cs_main could move more inside validation, so that cs_main would only need to be taken in write mode for the few things inside validation that actually write: shared locking and exclusive locking. Most of our threads don't touch cs_main in terms of writing to it. We only have a few threading concepts, like workers for HTTP and RPC and whatever else, which just take cs_main so they can read some data structure, and that's all straightforward. And then we have net stuff which takes cs_main in write mode because it just has to. We don't have that many threading models. And then wallet stuff.
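
To make the shared-versus-exclusive distinction concrete, here is a minimal reader-writer lock sketch. This is illustrative Python, not Bitcoin Core code (which is C++, where something like std::shared_mutex would play this role): HTTP/RPC-style reader threads take the shared side, and the rare writer takes the exclusive side.

```python
import threading

class SharedMutex:
    """Minimal reader-writer lock sketch: many concurrent readers (shared),
    one exclusive writer. No writer-preference, so writers can starve;
    a real implementation would need fairness."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writer = False

    def lock_shared(self):  # e.g. an RPC worker reading chain state
        with self._cond:
            while self._writer:
                self._cond.wait()
            self._readers += 1

    def unlock_shared(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()

    def lock(self):  # e.g. net processing updating chain state
        with self._cond:
            while self._writer or self._readers > 0:
                self._cond.wait()
            self._writer = True

    def unlock(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()
```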

Should cs_main be split up into separate locks, to make it more context-specific, rather than making everything read-write? It might help, but splitting it up is harder. Right now, cs_main is primarily used for three-ish different things. It's used to protect the main stuff like chainActive and the block index; it's used to protect the mempool, but that intermixes with chainActive; and then it's used for CNodeState, which is crazy… but disconnecting that is super complex. There have been a few attempts at it, and I want to take another shot at it. Having two CNodeStates for now might be the right way to go; doing it some other way is a bit more painful. And then you can start to take out cs_main, but that's a very long-term project. This is the only candidate for taking cs_main and splitting it between the mempool and whatever.

Giving the mempool its own state was an idea I used to support, but it's really hard. What do you do about the UTXO set state? The UTXO set that you validate and confirm transactions against will grow inconsistent with your mempool. You validate the unconfirmed transaction first… and then you can't add it to the mempool, I guess. If you add it to the mempool, a block comes in, and a conflicting transaction is removed after that, then you're fine. You don't need much UTXO state from the mempool; we already have code that pulls it into a dummy and releases a lock. You still have to synchronize it, though. You can get into a state where you have a transaction, you looked it up against the current UTXO set, you queued it up for validation, a block came in, it queued up a bunch of events…

We want to clean up code. Cleaning up validation means the validation engine, as it were (the thing that does all the validation, connecting and disconnecting), becomes a single thing with a backing disk store and a frontend that you shove things at. This makes a lot of other things easier, like the change from PracticalSwift to do block index lookups via a function so that the block index is not exposed to the world; once you have that, you can easily change it. The term you are looking for is isolation. Encapsulating validation is the first step. Encapsulating the p2p logic is a parallel step, and most likely much harder; it uses CNodeState, CNode and consensus structures. We're going to fix this soon. Cory and BlueMatt have continued to not agree on how to do that, and will continue to not agree for a long time, but yes, that part is fixable. CNodeState depending on everything else is a huge pain in the ass and an incredibly invasive change. I've done it in three separate branches; every time it works, but it's terrible.
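
A toy sketch of that encapsulation idea (hypothetical names, loosely modeled on wrapping the global block index map behind a lookup function): once every caller goes through a single accessor, the map's layout and locking can change without touching call sites.

```python
_block_index = {}  # module-private: block hash -> block index entry

def lookup_block_index(block_hash):
    """Single sanctioned accessor. Callers can no longer iterate or mutate
    the map directly, so its representation and locking are free to change."""
    return _block_index.get(block_hash)
```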

I want to be able to use Bitcoin Core for only tracking addresses or keys, where most likely your keys are stored in a hardware wallet. I did a PR for this, but we found out that it's not what we wanted. The PR I did was working, but every generated address was considered a watchonly address, and that's not what we want. We want keys that are ismine even if we don't have the private key; we want ismine for fundrawtransaction and other things. I want to separate ismine from having the private key. sipa has written this up, a separate ismine.

Almost everything in the code should not care about whether you actually have the private key for something or not. Making a separate "ismine" mode exposes the knowledge of whether the private key is actually there to everything, while it should go the other way around. We have the concept of watchonly, and watchonly means that it's ours while not having the key, but it's more appropriate for things like: this is a key that participates in a multisig and I don't actually have access to it, because it's my friend's key, but I want to see the transactions that affect it anyway. The watchonly concept is useful, but it should be orthogonal to whether we have the private keys. I have done a writeup of this that I plan to work on.
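
One way to picture that separation: model "this script is mine" and "I hold the private key" as independent flags rather than one watchonly-versus-spendable enum. A hedged sketch with hypothetical names (not Bitcoin Core's actual isminetype values):

```python
from enum import Flag, auto

class KeyStatus(Flag):
    """Hypothetical illustration: 'mine' and 'have the private key' are
    orthogonal properties instead of a single watchonly/spendable mode."""
    NONE = 0
    MINE = auto()       # wallet treats the script as its own (fundrawtransaction etc.)
    HAVE_KEY = auto()   # private key is present, so we can sign
    WATCHONLY = auto()  # explicitly watched, e.g. a friend's multisig key

hardware_wallet_addr = KeyStatus.MINE                  # ours, but the key lives elsewhere
hot_wallet_addr = KeyStatus.MINE | KeyStatus.HAVE_KEY  # ours and spendable locally
friends_multisig_key = KeyStatus.WATCHONLY             # tracked, but not ours
```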

sipa: I have a few lower-priority things; signature aggregation is one of them, p2p encryption is another. I think my priority now, in Bitcoin Core specifically, will be the wallet.

morcos: What I'm interested in the project doing, and potentially could do myself, is making things more scalable without talking about consensus changes. Faster performance things, potentially different security models; we've talked about starting up in an SPV mode and backfilling. I like that idea, but also assumeutxo startup so that you don't have to do IBD, which could also have a backfilling option. These things take a lot of thought about the right way to do them, what it means for us to put them forward, and the precedents around security models. I think these should be priorities, because we might lose people running nodes as we make it more difficult. I would like to put us in a technical position where, if it becomes the case that we need to do a consensus change to increase things, it is less of a technical issue. How long it takes to do IBD is a serious problem, so let's work on making all of that better. I think assumeutxo is interesting. What should we do first on that? Commitments? Then there are various ways to synchronize the UTXO set, UTXO set compaction…

My DNS seed crawler is seeing the highest number of nodes I've ever seen, around 12,000; probably most of them are in AWS.

Other than just running a node, it's also important to use a node. There's a cultural push to run nodes just for the sake of it, but the real reason should be to be sovereign and independently validate transactions. I wonder how many wallets are actually run by, or connected to, a self-controlled full node.

Get a full node up and running in 5 minutes: with assumeutxo, maybe that could be done. What about getting a full node synced on a laptop that is regularly going online and offline? What about a smartphone wallet that connects to your node over an authenticated connection?

Install the wallet on your phone, QR-pair it with your node on your desktop, and then it's ready to go wherever you go. You could also pair SPV lite clients with specific friends, who could share a QR code for their home connection or home node. You could add more optional indexes.

Copying a UTXO set from one node to another: it's not obvious how to do that, and it's easy to screw up. Pruning mode is not fully featured enough to do it; there are edge cases. It would be nice to get an authenticated string from an old node or a friend, and just sync.

"Assume UTXO" is an idea similar to assumevalid. In assumevalid, you have a hash that is hard-coded into the code, and for the chain of blocks ending in that hash you assume that those transactions have valid scripts. This is a startup optimization to skip script checks, if you're willing to believe that the developers properly reviewed and validated everything up to that point. You could do something similar where you cache the validity of a particular UTXO set, and it gets ACKed by developers before the code is updated. Anyone can independently recompute that value. There are some nuanced considerations here; it's a different thing than assumevalid, and the downside if you fuck up is bigger. With assumevalid, you can trick someone into thinking a block is valid when it wasn't; with assumeutxo, you might be able to create coins or something. There are definitely different failure modes, so there are different security implications. You could use the assumed-valid UTXO set but go back and double-check it later. But there's nuance about whether and how this should be done. You could have nodes that export this value; you could lie to your friend, I guess, but that's less scary than other things.
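
By analogy with assumevalid, the check might look something like the sketch below. Everything here is a placeholder (the hash constant, the serialization, the function names); it only illustrates the idea of comparing a recomputable digest of the UTXO set against a developer-reviewed constant shipped in the source.

```python
import hashlib

# Placeholder standing in for a developer-reviewed, hard-coded hash of a
# serialized UTXO set at some block height (analogous to assumevalid's
# hard-coded block hash). Anyone can recompute it independently.
ASSUMED_UTXO_HASH = "..."  # hypothetical, committed into the source at release

def utxo_set_hash(utxos):
    """Hash a canonical serialization of (txid, index, amount, script) entries.
    The serialization here is invented for illustration only."""
    h = hashlib.sha256()
    for txid, index, amount, script in sorted(utxos):
        h.update(bytes.fromhex(txid)
                 + index.to_bytes(4, "little")
                 + amount.to_bytes(8, "little")
                 + script)
    return h.hexdigest()

def accept_snapshot(utxos):
    # Start from the snapshot only if it matches the reviewed hash;
    # full validation could still backfill and double-check it later.
    return utxo_set_hash(utxos) == ASSUMED_UTXO_HASH
```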

The hybrid SPV PR was not able to handle finding out later that a chain is invalid. If you want to do hybrid SPV, you have to start by tracking candidate headers, which was the PR BlueMatt mentioned earlier. It exists, and you're welcome to attempt to review it, but I think that's the only logical first step towards hybrid SPV.