Image segmentation refers to an AI model's ability to identify the different items in a photo. For example, given a photo of a box of fruit, a segmentation model can pick out each individual piece of fruit as well as the box itself, as Meta's demo shows.

Meta's Segment Anything project includes a new task, dataset, and model for image segmentation that aims to "democratize segmentation," according to the company.

Meta released both its general-purpose Segment Anything Model (SAM) and its largest-ever segmentation dataset, the Segment Anything 1-Billion mask dataset (SA-1B), which contains more than 1 billion masks across 11 million licensed, privacy-respecting images. Meta says it released SAM and SA-1B "to enable a wide range of applications and promote further research into foundation models for computer vision." Image segmentation can be used for photo editing, scientific image analysis, larger AI systems that require a general multimodal understanding of the world, and, most interestingly, AR and VR.

Previous segmentation models either required a person to guide them through interactive segmentation or had to be trained on substantial amounts of manually annotated objects for automatic segmentation. SAM is a single model that can perform either segmentation method, which means practitioners no longer have to collect their own segmentation data or fine-tune a model for their use case, saving both time and effort.

You can test out the technology by visiting the Segment Anything demo site and uploading your own image or using a photo from the gallery.
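For readers who want to go beyond the demo site, the sketch below shows how Meta's open-source segment-anything Python package exposes both modes described above: prompt-guided segmentation of a single object and fully automatic segmentation of everything in the image. It assumes you have installed the package and OpenCV, downloaded a SAM checkpoint file, and have a local image on disk; the image filename and the click coordinates are hypothetical placeholders.

```python
# Minimal sketch of both SAM usage modes, assuming:
#   pip install segment-anything opencv-python
#   a downloaded checkpoint (e.g. sam_vit_h_4b8939.pth)
#   a local image file "fruit_box.jpg" (hypothetical)
import cv2
import numpy as np
from segment_anything import (
    SamAutomaticMaskGenerator,
    SamPredictor,
    sam_model_registry,
)

# Load the pretrained model (ViT-H variant) from the checkpoint on disk.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")

# Read the image and convert OpenCV's BGR output to RGB, as SAM expects.
image = cv2.cvtColor(cv2.imread("fruit_box.jpg"), cv2.COLOR_BGR2RGB)

# 1) Interactive (promptable) segmentation: supply a single foreground
#    point and SAM returns candidate masks for the object under it.
predictor = SamPredictor(sam)
predictor.set_image(image)
masks, scores, _ = predictor.predict(
    point_coords=np.array([[250, 180]]),  # (x, y) pixel of interest
    point_labels=np.array([1]),           # 1 = foreground point
    multimask_output=True,
)
print(f"Promptable mode returned {len(masks)} candidate masks")

# 2) Automatic segmentation: no prompts; SAM proposes masks for everything
#    it finds in the image (each fruit, the box, and so on).
mask_generator = SamAutomaticMaskGenerator(sam)
all_masks = mask_generator.generate(image)
print(f"Automatic mode found {len(all_masks)} masks")
```

Both modes run off the same loaded model, which is the point of the article's claim: one checkpoint covers the prompt-guided and fully automatic workflows without any task-specific fine-tuning.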