The best approach, for both iOS and Android, is to show panoramic views from designated roads throughout the coverage area using the Google Maps Street View SDK, which lets users explore places around the world through 360-degree, street-level imagery. World landmarks, trip routes, and the exteriors of businesses can all be explored through this API.
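As a small illustration of fetching street-level imagery, the related Street View Static API exposes the same panoramas over plain HTTP. This is a sketch only; the endpoint and parameter names (`location`, `size`, `heading`, `key`) are from the public Static API, while `YOUR_API_KEY` and the helper function name are placeholders of our own:

```python
# Sketch: building a Google Street View Static API request URL for a
# coordinate. "YOUR_API_KEY" is a placeholder, not a real credential.
from urllib.parse import urlencode

STREETVIEW_ENDPOINT = "https://maps.googleapis.com/maps/api/streetview"

def street_view_url(lat: float, lng: float, heading: int = 0,
                    size: str = "640x640", key: str = "YOUR_API_KEY") -> str:
    """Return a Static API URL for one frame of a panorama."""
    params = {
        "location": f"{lat},{lng}",  # panorama position
        "size": size,                # image dimensions in pixels
        "heading": heading,          # compass direction of the camera, 0-360
        "key": key,
    }
    return f"{STREETVIEW_ENDPOINT}?{urlencode(params)}"

url = street_view_url(40.720032, -73.988354, heading=235)
```

In the app itself the mobile Street View SDK would render the interactive panorama; a URL like this is only useful for fetching a still image of it server-side.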
Each Street View panorama is an image, or set of images, that provides a full 360-degree view from a single location. After obtaining street images from the map, the Google Cloud Vision API can be used to locate street furniture. The Cloud Vision API detects and extracts information about entities within an image across a broad group of categories; its label detection, for example, can identify objects, locations, activities, products, and more.
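For reference, a label-detection call to the Cloud Vision REST endpoint (`images:annotate`) takes a JSON body like the one built below. The `requests`/`image.source.imageUri`/`features.type` structure follows the documented REST format; the image URL is a placeholder:

```python
# Sketch: JSON body for a Cloud Vision "images:annotate" label-detection
# request. The image URI is a placeholder, not a real hosted image.
def label_detection_request(image_uri: str, max_results: int = 10) -> dict:
    """Build the request body for LABEL_DETECTION on one remote image."""
    return {
        "requests": [{
            "image": {"source": {"imageUri": image_uri}},
            "features": [{
                "type": "LABEL_DETECTION",   # ask only for labels
                "maxResults": max_results,   # cap the number of labels
            }],
        }]
    }

body = label_detection_request("https://example.com/street.jpg")
```

The same request can also carry base64-encoded image bytes instead of a URI, which is how a captured Street View frame would typically be sent from the app.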
For example, a street image containing a zebra crossing, two trees, one traffic point, and two packets may return the following list of labels (with confidence scores):

- crossing - 0.848
- tree - 0.825
- traffic point - 0.835
- packet - 0.811
In this way, the relevant objects can be detected in the street images and annotated as pins on the map view, so the user can explore places and see details like those above along with the image.
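The detection-to-pin step can be sketched as follows. The `score` and `description` fields match the Vision API's `labelAnnotations` response; the `Pin` structure, the 0.8 threshold, and the idea of attaching every label to the panorama's coordinate are our own assumptions for illustration:

```python
# Sketch: turning Vision API labelAnnotations into map-pin data.
# Pin, the threshold, and the coordinate handling are assumptions,
# not part of either Google API.
from dataclasses import dataclass

@dataclass
class Pin:
    lat: float
    lng: float
    label: str
    score: float

def pins_from_labels(lat, lng, label_annotations, threshold=0.8):
    """Keep labels whose confidence meets the threshold and attach them
    to the panorama's coordinate so they can be drawn on the map view."""
    return [Pin(lat, lng, a["description"], a["score"])
            for a in label_annotations
            if a["score"] >= threshold]

# Example response fragment, using the scores from the list above.
labels = [
    {"description": "crossing", "score": 0.848},
    {"description": "tree", "score": 0.825},
    {"description": "traffic point", "score": 0.835},
    {"description": "packet", "score": 0.811},
]
pins = pins_from_labels(40.720032, -73.988354, labels)
```

Tapping a pin in the app would then open the detail view with the label, score, and source image.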
Should the user have to enter or choose their location manually to show the located items on the map? Or should the device's current location be fetched and converted into coordinates, so the panoramic view moves near that coordinate? Or should both options be supported? Please confirm.