I would like to find the first element greater than a given limit in an ordered collection. Iterating over it is always an option, but I need something faster. Currently I have come up with a solution like this, but it feels a little hacky:

```rust
use std::cmp::Ordering;
use std::collections::BTreeMap;
use std::ops::Bound::{Included, Unbounded};

#[derive(Debug)]
struct FloatWrapper(f32);

impl Eq for FloatWrapper {}

impl PartialEq for FloatWrapper {
    fn eq(&self, other: &Self) -> bool {
        (self.0 - other.0).abs() < 1.17549435e-36f32
    }
}

impl Ord for FloatWrapper {
    fn cmp(&self, other: &Self) -> Ordering {
        if (self.0 - other.0).abs() < 1.17549435e-36f32 {
            Ordering::Equal
        } else if self.0 - other.0 > 0.0 {
            Ordering::Greater
        } else if self.0 - other.0 < 0.0 {
            Ordering::Less
        } else {
            Ordering::Equal
        }
    }
}

impl PartialOrd for FloatWrapper {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}
```

The wrapper around the float is not nice, even though I am sure that there will be no NaNs.

The `range` is also unnecessary, since I want a single element.

Is there a better way of achieving a similar result using only Rust's standard library? I know that there are plenty of tree implementations, but that feels like overkill.

After the suggestion in the answers to use an iterator, I did a little benchmark with the following code:

```rust
use std::time::Instant;

use rand::{thread_rng, Rng}; // rand 0.7 API: gen_range(low, high)

fn main() {
    let measure = vec![
        10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150, 160, 170, 180,
        190, 200,
    ];
    let mut measured_binary = Vec::new();
    let mut measured_iter = Vec::new();
    let mut measured_vec = Vec::new();
    for size in measure {
        let mut ww = BTreeMap::new();
        let mut what_found = Vec::new();
        for _ in 0..size {
            let now: f32 = thread_rng().gen_range(0.0, 1.0);
            ww.insert(FloatWrapper(now), now);
        }
        let what_to_search: Vec<FloatWrapper> = (0..10000)
            .map(|_| thread_rng().gen_range(0.0, 0.8))
            .map(FloatWrapper)
            .collect();

        let mut rez = 0;
        for current in &what_to_search {
            let now = Instant::now();
            let m = find_one(&ww, current);
            rez += now.elapsed().as_nanos();
            what_found.push(m);
        }
        measured_binary.push(rez);

        rez = 0;
        for current in &what_to_search {
            let now = Instant::now();
            let m = find_two(&ww, current);
            rez += now.elapsed().as_nanos();
            what_found.push(m);
        }
        measured_iter.push(rez);

        // FloatWrapper is not Copy, so rebuild the keys instead of
        // destructuring them by value.
        let ww_in_vec: Vec<(FloatWrapper, f32)> = ww
            .iter()
            .map(|(key, &value)| (FloatWrapper(key.0), value))
            .collect();
        rez = 0;
        for current in &what_to_search {
            let now = Instant::now();
            let m = find_three(&ww_in_vec, current);
            rez += now.elapsed().as_nanos();
            what_found.push(m);
        }
        measured_vec.push(rez);
        println!("{:?}", what_found);
    }
    println!("binary   : {:?}", measured_binary);
    println!("iter_map : {:?}", measured_iter);
    println!("iter_vec : {:?}", measured_vec);
}

fn find_one(from_what: &BTreeMap<FloatWrapper, f32>, what: &FloatWrapper) -> f32 {
    let v: Vec<f32> = from_what
        .range((Included(what), Unbounded))
        .take(1)
        .map(|(_, &v)| v)
        .collect();
    *v.get(0).expect("we are in trouble")
}

fn find_two(from_what: &BTreeMap<FloatWrapper, f32>, what: &FloatWrapper) -> f32 {
    from_what
        .iter()
        .skip_while(|(i, _)| *i < what) // Skip all elements before the limit
        .take(1)                        // Reduce the iterator to 1 element
        .map(|(_, &v)| v)               // Keep only the value, dereferenced
        .next()
        .expect("we are in trouble")
}

fn find_three(from_what: &[(FloatWrapper, f32)], what: &FloatWrapper) -> f32 {
    from_what
        .iter()
        .skip_while(|(i, _)| i < what) // Skip all elements before the limit
        .take(1)                       // Reduce the iterator to 1 element
        .map(|(_, &v)| v)              // Keep only the value, dereferenced
        .next()
        .expect("we are in trouble")
}
```

The key takeaway for me is that it is worth using binary search above roughly 50 elements. In my case, with 30,000 elements, that means a 200x speedup (at least based on this microbenchmark).