suyeong-park committed
Commit 576e868 · verified · 1 Parent(s): 59b8891

Update README.md

Files changed (1): README.md (+9 -9)
README.md CHANGED
@@ -42,17 +42,17 @@ Benchmark dataset for evaluating vector database performance on financial news d
 | Split | Samples | Description |
 |-------|---------|-------------|
 | `train` | 368,816 | Training embeddings (80% random sample from source) |
-| `test_1k` | 1,000 | Test query embeddings (from remaining 20%, non-overlapping) |
-| `neigbors_1k.parquet` | 1,000 | Top-1000 nearest neighbors for each test query |
+| `test` | 1,000 | Test query embeddings (from remaining 20%, non-overlapping) |
+| `neigbors.parquet` | 1,000 | Top-1000 nearest neighbors for each test query |
 
 ### Data Fields
 
-#### train & test_1k
+#### train & test
 - `id` (int64): Unique identifier for each article
 - `emb` (List[float64]): 768-dimensional L2-normalized embedding vector
 
-#### neigbors_1k.parquet
-- `id` (int64): Query identifier (matches test_1k)
+#### neigbors.parquet
+- `id` (int64): Query identifier (matches test)
 - `neighbors_id` (List[int64]): List of 1000 nearest neighbor IDs from train set
 
 ## Dataset Creation
@@ -96,11 +96,11 @@ import pandas as pd
 # Load train and test splits
 dataset = load_dataset("redcourage/Bloomberg-Financial-News-embedding-gemma-300m")
 train = dataset['train']
-test = dataset['test_1k']
+test = dataset['test']
 
 # Load ground truth
 neigbors = pd.read_parquet(
-    "hf://datasets/redcourage/Bloomberg-Financial-News-embedding-gemma-300m/neigbors_1k.parquet"
+    "hf://datasets/redcourage/Bloomberg-Financial-News-embedding-gemma-300m/neigbors.parquet"
 )
 ```
 
@@ -113,9 +113,9 @@ import pandas as pd
 # Load data
 dataset = load_dataset("redcourage/Bloomberg-Financial-News-embedding-gemma-300m")
 train_data = dataset['train']
-test_data = dataset['test_1k']
+test_data = dataset['test']
 neigbors = pd.read_parquet(
-    "hf://datasets/redcourage/Bloomberg-Financial-News-embedding-gemma-300m/neigbors_1k.parquet"
+    "hf://datasets/redcourage/Bloomberg-Financial-News-embedding-gemma-300m/neigbors.parquet"
 )
 
 # Convert to numpy arrays
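For completeness, here is one way the renamed `test` split and `neigbors.parquet` ground truth could drive a recall@k check. This is a minimal sketch, not part of the commit: it assumes NumPy is available, picks an illustrative `k = 10`, assumes `neighbors_id` is ordered nearest-first, and uses a batched brute-force dot-product search (valid because the embeddings are L2-normalized, so dot product equals cosine similarity).

```python
import numpy as np
import pandas as pd
from datasets import load_dataset

# Load embeddings and ground truth; names follow this commit's rename
dataset = load_dataset("redcourage/Bloomberg-Financial-News-embedding-gemma-300m")
train_emb = np.asarray(dataset["train"]["emb"], dtype=np.float32)
train_ids = np.asarray(dataset["train"]["id"])
test_emb = np.asarray(dataset["test"]["emb"], dtype=np.float32)

neigbors = pd.read_parquet(
    "hf://datasets/redcourage/Bloomberg-Financial-News-embedding-gemma-300m/neigbors.parquet"
)
# Align ground truth to the test split by query id
gt = neigbors.set_index("id").loc[dataset["test"]["id"], "neighbors_id"]

k = 10  # illustrative choice; any k <= 1000 works against this ground truth
hits = 0
for start in range(0, len(test_emb), 100):   # batch to bound the score matrix
    batch = test_emb[start:start + 100]
    scores = batch @ train_emb.T             # dot == cosine for unit vectors
    topk = train_ids[np.argpartition(-scores, k, axis=1)[:, :k]]
    for found, truth in zip(topk, gt.iloc[start:start + 100]):
        hits += len(set(found) & set(truth[:k]))  # truth assumed nearest-first

print(f"recall@{k}: {hits / (len(test_emb) * k):.4f}")
```

An exact search like this should score 1.0; the brute-force block is where an ANN index under test would slot in.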
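A second sketch, also illustrative rather than part of the dataset's tooling: the claims in the split table (sample counts, the non-overlap between train and test, and 1000 train-side neighbors per query) can be checked directly after loading.

```python
import pandas as pd
from datasets import load_dataset

dataset = load_dataset("redcourage/Bloomberg-Financial-News-embedding-gemma-300m")
neigbors = pd.read_parquet(
    "hf://datasets/redcourage/Bloomberg-Financial-News-embedding-gemma-300m/neigbors.parquet"
)

# Sample counts quoted in the README table
assert len(dataset["train"]) == 368_816
assert len(dataset["test"]) == 1_000
assert len(neigbors) == 1_000

# "non-overlapping": no test id appears in train
train_ids = set(dataset["train"]["id"])
assert not train_ids & set(dataset["test"]["id"])

# Each query has 1000 neighbor ids, all drawn from the train set
assert all(len(n) == 1000 and set(n) <= train_ids
           for n in neigbors["neighbors_id"])
```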