
[Chronik] Enable SLP/ALP index, add protobuf messages & fields
ClosedPublic

Authored by tobias_ruck on Jan 5 2024, 22:30.

Details

Summary
  • Enable the SLP/ALP index unconditionally
  • Add protobuf messages & fields for SLP/ALP
  • Add TxTokenData to read the required mempool/DB data and output the protobuf

Note that there's currently no way to disable the token index or reindex it on its own. The specifics would be unwieldy: RocksDB requires us to specify the column families when opening the DB, and we'd have to conditionally index as well as conditionally read from the DB. Also, we might want to add more indices or even a plugin interface in the future; in that case it would be better to have a generalized system to enable/disable/selectively reindex individual indices.

Test Plan

ninja check-functional

Event Timeline

The intent is mostly to open a discussion about the Protobuf API, and perhaps to let someone run a test instance.
Note: the test is currently a huge pile of unstructured code accumulated over time; however, it already shows what the interface would look like.

I think for this it would be a good idea to have a kind of behavior testing, where we specify in Python how txs are made and what we expect them to look like, and then run them in different environments (mempool, mined, etc.).
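A rough sketch of what such a behavior test could look like; TokenTestCase, check_cases and the node/chronik handles are hypothetical placeholders, not existing test framework code:

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class TokenTestCase:
    """One tx spec plus the token data we expect Chronik to report for it."""
    name: str
    build_tx: Callable[[], str]  # builds the tx and returns its raw hex
    expected_entries: list       # expected token entries, in the form chronik.tx() returns them
    txid: str = ""


def check_cases(node, chronik, cases: List[TokenTestCase]):
    # Broadcast every case and check the mempool view of each tx...
    for case in cases:
        case.txid = node.sendrawtransaction(case.build_tx())
        assert chronik.tx(case.txid).token_entries == case.expected_entries, case.name
    # ...then mine a block and verify the same expectations for the mined txs.
    node.generatetoaddress(1, node.getnewaddress())
    for case in cases:
        assert chronik.tx(case.txid).token_entries == case.expected_entries, case.name
```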

The build failed due to an unexpected infrastructure outage. The administrators have been notified to investigate. Sorry for the inconvenience.

rename is_normal -> token_is_invalid, makes functional tests succeed again

The build failed due to an unexpected infrastructure outage. The administrators have been notified to investigate. Sorry for the inconvenience.

start splitting up tests, still missing:

  • all kinds of burns
  • parse failure
  • utxos, history

Failed test logs:

====== /block and /blocks: "before each" hook for "gives us the block and blocks after unparking the last block" ======
Error: Timeout of 10000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves. (/work/modules/chronik-client/test/integration/block_and_blocks.ts)
    at listOnTimeout (node:internal/timers:559:17)
    at processTimers (node:internal/timers:502:7)

Each failure log is accessible here:
/block and /blocks: "before each" hook for "gives us the block and blocks after unparking the last block"

Fabien requested changes to this revision. Feb 2 2024, 11:08

Still need to review the tests

chronik/chronik-indexer/src/query/group_utxos.rs
138

slp isn't a good name for this variable, what about token?

chronik/chronik-indexer/src/query/tx_token_data.rs
101
121
132

TIL as _

138
159

out of scope, but this is_normal would benefit from a rename. Maybe has_unintentional_burns?

EDIT: on second thought, why check is_normal at all? Can't you just always include the burn summary? It would say there's no issue when that's the case, which is better than an empty string imo

chronik/chronik-indexer/src/query/util.rs
118

the name is confusing imo, there could be unintentional burns but this doesn't make it invalid

test/functional/chronik_token_alp.py
44–53
test/functional/test_framework/chronik/alp.py
47

Make it a constant (with a comment)

This revision now requires changes to proceed. Feb 2 2024, 11:08
chronik/chronik-proto/proto/chronik.proto
193

does the proto not imply this field needs to be 0? wouldn't ALP_TOKEN_TYPE_STANDARD be something like bytes (vs = 0) if it were just passing the type along from the OP_RETURN?

200

why is this case not a BURN?

215

where is this used? is there an endpoint that queries this directly?

ctrl+f TokenInfo in this diff just finds this line + 2 comments in chronik.proto

262

is this undefined for ALP? are the fields shown in these proto interfaces not required?

284

...at least, I think we don't support mint batons

300

does it make sense for SLP/ALP/SLP2 to share the same proto types?

Even if most of the info is the same, it seems like some fields are unique to just one type. And as we deal with new token types, some could be even less suited to fitting into the existing shapes here.

What's the case against TokenEntry being something like oneOf(AlpTokenEntry, Slp1TokenEntry, Slp2TokenEntry), even if the fields are, for now, quite similar?

chronik/chronik-proto/proto/chronik.proto
200

It is an unintentional burn vs a BURN command which is explicitly intentional

215
262

add bare burn test + apply some comments

chronik/chronik-indexer/src/query/tx_token_data.rs
159

I kept it previously, but it was quite annoying in the tests and usually just useless data if there's no unintentional burn. But I'm open to adding it back in.

There's already a function has_unintentional_burns; is_normal calls it but also checks whether there are any failed colorings (which don't necessarily generate any burns). "Normal" implies there's nothing extraordinary going on.

It's not straightforward to find a good name for this; maybe we can get some third opinions.

chronik/chronik-indexer/src/query/util.rs
118

That's true, there can be tokens moved in the tx and token_is_invalid could be true at the same time.

I originally called it token_is_normal, implying no failed parsings and that all token entries have is_normal, but this would've changed every test that tests a Tx (since a normal XEC tx would have token_is_normal true).

Maybe it's best to make this an enum, token_status: NON_TOKEN, NORMAL, UNINTENTIONAL_BURNS. That way the default (NON_TOKEN) implies no token stuff whatsoever (no burns and no failed parsing/coloring).
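A small sketch of how that classification might work, using the proposed names; the tx fields (token_entries, failed_parsings, has_unintentional_burns) are placeholders, not a finalized API:

```python
# Illustrative only: the enum values follow the proposal above.
NON_TOKEN, NORMAL, UNINTENTIONAL_BURNS = range(3)


def token_status_of(tx):
    """Classify a tx the way the proposed token_status enum would."""
    if not tx.token_entries and not tx.failed_parsings:
        # Plain XEC tx: no token sections, no burns, no failed parsing/coloring.
        return NON_TOKEN
    if tx.failed_parsings or any(e.has_unintentional_burns for e in tx.token_entries):
        return UNINTENTIONAL_BURNS
    # Token tx with nothing extraordinary going on.
    return NORMAL
```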

chronik/chronik-indexer/src/query/util.rs
118

That would make sense

address joey's comments

chronik/chronik-proto/proto/chronik.proto
193

I'm not quite sure I understand, but let me try to explain:

We're using a oneof, which is a tagged union (kind of like an enum in Rust), so we know whether it's SLP or ALP.

Both SLP and ALP have a concept of token type, which is just an integer (in both cases a single byte), and we use a proto enum (essentially a named integer) to expose that integer to the user.
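For illustration, this is roughly how a Python client would read such a tagged union with the protobuf API; the oneof and field names ("token_type", "slp", "alp") are assumptions here, not the exact schema:

```python
def describe_token_type(token_type_msg):
    # WhichOneof() returns the name of the variant that is set, or None.
    variant = token_type_msg.WhichOneof("token_type")
    if variant == "slp":
        # The proto enum is just a named integer mirroring SLP's one-byte type.
        return f"SLP token type {token_type_msg.slp}"
    if variant == "alp":
        return f"ALP token type {token_type_msg.alp}"
    return "no token type set"
```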

200

It's for the specific case of a "bare burn", where input tokens are burned but there's no corresponding OP_RETURN.

I've added a test for it, you can search for bare_burn in chronik_token_burn.py (note how the tx_type is unspecified i.e. NONE).

215

Good find, should I move it to D15369? This diff is already quite big

There's also a typo in "endiannnes"

262

It's essentially the union of all options, and if a field can't be specified (like hash in ALP, or mint_vault_scripthash in everything except V2 Mint Vault), it's simply left blank.

The token_type definitively establishes which field is "the empty bytestring" vs "not specified", so IMO that's fine.
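A sketch of how a client could interpret that, with hypothetical field names: the token_type decides whether a blank field means "empty value" or "not applicable".

```python
def genesis_hash_or_none(genesis_info, token_type_msg):
    # ALP genesis data has no document hash at all, so empty bytes here mean
    # "not applicable" rather than "empty hash".
    if token_type_msg.WhichOneof("token_type") == "alp":
        return None
    return genesis_info.hash or None
```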

284

I wouldn't mind that!

300

I found that in practice it's much easier to work with one unified type.

The original design actually had different types for everything and it was very annoying to always have to match on the different cases.

Think about implementing the explorer; you'd need big match blocks with branches that all basically do the same thing.

add token_status to replace token_is_invalid; also remove TokenInfo

Fabien requested changes to this revision. Feb 2 2024, 21:23
Fabien added inline comments.
test/functional/test_framework/chronik/alp.py
47

My comment was about the 6-byte integer, not the endianness. "Little" is very explicit and doesn't need a literal :) But 6-byte integers are specific to ALP

This revision now requires changes to proceed. Feb 2 2024, 21:23

add ALP_INT_SIZE and SLP_INT_SIZE instead of the endianness
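A minimal sketch of what those constants could look like; the actual test_framework/chronik/alp.py may differ:

```python
ALP_INT_SIZE = 6  # ALP amounts are 6-byte (48-bit) integers
SLP_INT_SIZE = 8  # SLP amounts are 8-byte (64-bit) integers


def alp_int(value: int) -> bytes:
    # ALP serializes its integers little-endian inside the script payload
    return value.to_bytes(ALP_INT_SIZE, "little")
```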

This revision is now accepted and ready to land. Feb 3 2024, 02:11