Prosecution Insights
Last updated: April 19, 2026
Application No. 18/624,922

TAG IDENTIFICATION

Non-Final OA: §102, §103, §112

Filed: Apr 02, 2024
Examiner: BALI, VIKKRAM
Art Unit: 2663
Tech Center: 2600 — Communications
Assignee: Opsec Security Inc.
OA Round: 1 (Non-Final)

Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
Grant Probability With Interview: 93%

Examiner Intelligence

Career Allow Rate: 82%, above average (510 granted / 626 resolved; +19.5% vs TC avg)
Interview Lift: +11.3% (moderate lift, based on resolved cases with interview)
Typical Timeline: 2y 11m average prosecution; 34 applications currently pending
Career History: 660 total applications across all art units

Statute-Specific Performance

§101: 16.7% (-23.3% vs TC avg)
§102: 7.8% (-32.2% vs TC avg)
§103: 51.2% (+11.2% vs TC avg)
§112: 18.9% (-21.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 626 resolved cases.
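For readers who want the absolute baseline rather than the delta, the implied Tech Center average can be reconstructed from each pair of figures above. A minimal Python sketch, assuming each delta is simply the examiner's rate minus the Tech Center average (the report does not state how the deltas are computed):

```python
# Reconstruct the implied Tech Center baseline for each statute,
# assuming delta = examiner_rate - tc_average (an assumption of this
# note, not documented by the report itself).
rates = {            # statute: (examiner rate %, delta vs TC avg %)
    "101": (16.7, -23.3),
    "102": (7.8, -32.2),
    "103": (51.2, +11.2),
    "112": (18.9, -21.1),
}

for statute, (examiner_rate, delta) in rates.items():
    tc_average = examiner_rate - delta
    print(f"§{statute}: examiner {examiner_rate:.1f}% vs TC avg ~{tc_average:.1f}%")
```

Under that assumption, each statute's Tech Center baseline works out to roughly 40%.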

Office Action

§102 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-25 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 1 is a system claim with a recitation of “a processor configured to”; this makes the claim unclear as to how a system/apparatus exists in all of these states at a point in time if it’s just processor configuration without storage of the software in a memory. Dependent claims 2-25 are rejected as well, as they depend on rejected independent claim 1.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-15, 17-21 and 24-27 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Talbot et al. (US Pub. 2020/0219043).
With respect to claim 1, Talbot discloses A system comprising: an imaging sensor, wherein the imaging sensor acquires one or more images of one or more tags from light reflected from the one or more tags on a tagged item, (see figure 1A, 101-1, 101-2 and paragraph 0070, wherein …users may use the device(s) 101 …to capture images…; paragraph 0119, wherein …image may also include …items such as price tags, promotional placards, advertisements, and the like); and a processor (see figure 1A, numerical 102) configured to: receive the one or more images, (see paragraph 0010, wherein …receiving, at a server and via a mobile device, a digital image of an array of products, and determining, in the digital image…; see figure 1B, images from 101 to the workflow manager 122); receive a library of tag types, (see figure 1B, Product lookup module 120 dataflow to the workflow manager 122; see paragraph 0084, wherein …in order to provide more useful information, the product lookup module 120 may store product information in association with product identifiers…); determine, using the one or more images, a set of feature metrics, wherein determining the set of feature metrics uses a machine learning algorithm and is based at least in part on one or more of an image processing, manipulation, or correction; determine, using the set of feature metrics and the library of tag types, a tag type of the one or more tags in the one or more images, wherein determining the tag type is based at least in part on one or more of: a local maxima determination, a bounding box generation, a tag candidate patch extraction, a tag candidate segmentation, a tag candidate feature metric determination, and a comparison to a model, (see paragraph 0010, wherein … determining a candidate product identifier “determining the tag type”…; see paragraph 0192, wherein …operation 606, features are extracted from the image and/or from the individual segments of the image. At operation 608, the features are analyzed with an image analysis engine. The image analysis engine may be or include a machine learning model “determining the set of feature metrics uses a machine learning algorithm” trained…; see figure 2O, wherein numerical 291 shows the bounding box around the individual segments); determine a confidence level of the tag type; and in response to the confidence level being above a threshold level, provide the tag type determined, (see paragraph 0010, wherein …method may further include, if the confidence value satisfies a condition, associating the candidate product identifier with the segment and sending candidate product information, based on the candidate product identifier, to the mobile device for display in association with the segment…), as claimed.

With respect to claim 2, Talbot further discloses wherein the set of feature metrics is determined for each of the one or more images, (see paragraph 0158, wherein …image analysis engine 110 may be configured to identify multipacks in images, identify the products in the multipacks, and determine how the multipacks are oriented on the display. More particularly, the image segmentation module 114 may determine segments of images…), as claimed.

With respect to claim 3, Talbot further discloses wherein the set of feature metrics is determined for each of the one or more tags in the one or more images, (see paragraph 0192, wherein …operation 606, features are extracted from the image and/or from the individual segments of the image.
At operation 608, the features are analyzed with an image analysis engine…), as claimed.

With respect to claim 4, Talbot further discloses wherein a tag of the one or more tags comprises a microtag, a taggant, a chemical marker, a physical marker, a rugate filter, an interference filter, a pigment, a flake, a platelet, or a granule, (see paragraph 0222, wherein …. In particular, after the image is captured, it may be analyzed to identify regions that may correspond to text “a physical marker” of interest… for example, based on colors, a determination that an area includes some text (though the text may not be analyzed), a size of the features that are determined to be text…), as claimed.

With respect to claim 5, Talbot further discloses wherein the tag comprises one or more, or one or more combinations of: silicon, silicon dioxide, potassium aluminum silicate, mica, titanium dioxide, pigmented or dyed metallic and metallicized substrates, polymeric materials, a combination of high and low refractive index thin films, or any other material whose properties are differentiated from a bulk media in which the tag is embedded for a purpose of identification, (see paragraph 0253, wherein …The touch sensors 1603 may include any suitable components for detecting touch-based inputs and generating signals or data that are able to be accessed using processor instructions, including electrodes (e.g., electrode layers), physical components (e.g., substrates, spacing layers, structural supports, compressible elements, etc.) processors, circuitry, firmware, and the like…), as claimed.

With respect to claim 6, Talbot further discloses wherein the tagged item comprises a drug product, a food product, a tablet, a capsule, a label, a container, a seed, a consumer product (or any part thereof), an electronic material (or any part thereof), an industrial product (or any part thereof), or a package, (see figure 2K, Energy drink), as claimed.

With respect to claims 7 and 8, Talbot further discloses wherein the processor is also configured to determine one or more additional identifying features of the tagged item; and wherein an additional identifying feature of the one or more additional identifying features comprises one or more of a quick response code, a barcode, a two-dimensional matrix, a data matrix, a logo, a serial number, an item shape, a luminosity, a color, a mark, an indicium, or a randomly serialized marker, (see paragraph 0146, wherein … In some cases a user may be able to manually enter product information “additional identifying features”, take a new image of the product (e.g., after removing the product from the display case), scan a barcode of the product, manually enter a universal product code number, or the like…), as claimed.

With respect to claim 9, Talbot further discloses wherein the processor is also configured to determine an identity of the tagged item based at least in part on one or more of an additional identifying feature of the tagged item, (see paragraph 0222, wherein …the actual contents of the text may not be analyzed. Instead, areas of interest may be identified, for example, based on colors, a determination that an area includes some text (though the text may not be analyzed), a size of the features that are determined to be text, or the like…), as claimed.
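As a purely illustrative aside (not part of the Office Action), the claim 1 workflow the examiner maps onto Talbot (feature metrics extracted from the images, a tag type determined against a library of tag types, and the result provided only when a confidence level clears a threshold) can be sketched in a few lines. Every name and number below, including extract_feature_metrics, classify_tag, the 0.8 threshold, and the toy tag library, is hypothetical and comes from neither the application nor Talbot:

```python
# Hypothetical sketch of a claim-1-style pipeline: extract feature metrics
# from a candidate tag patch, classify against a library of known tag
# types, and only report the tag type above a confidence threshold.
from dataclasses import dataclass

@dataclass
class TagMatch:
    tag_type: str
    confidence: float

def extract_feature_metrics(patch):
    """Stand-in for the ML / image-processing step; here it just returns
    simple intensity statistics as the 'feature metrics'."""
    flat = [px for row in patch for px in row]
    mean = sum(flat) / len(flat)
    var = sum((px - mean) ** 2 for px in flat) / len(flat)
    return {"mean": mean, "std": var ** 0.5}

def classify_tag(metrics, library):
    """Pick the nearest reference tag type and derive a crude
    distance-based confidence score in [0, 1]."""
    def distance(ref):
        return sum((metrics[k] - ref[k]) ** 2 for k in ref) ** 0.5
    best = min(library, key=lambda name: distance(library[name]))
    confidence = 1.0 / (1.0 + distance(library[best]))
    return TagMatch(best, confidence)

def identify_tag(patch, library, threshold=0.8):
    metrics = extract_feature_metrics(patch)
    match = classify_tag(metrics, library)
    # Only provide the tag type if the confidence clears the threshold.
    return match if match.confidence >= threshold else None

# Toy library of two hypothetical tag types and a 2x2 grayscale patch.
library = {"rugate": {"mean": 180.0, "std": 1.5},
           "interference": {"mean": 90.0, "std": 35.0}}
patch = [[178, 182], [181, 179]]
print(identify_tag(patch, library))  # TagMatch(tag_type='rugate', confidence≈0.92)
```

A real system of the kind the claims describe would replace the nearest-neighbour stand-in with a trained machine learning model; the sketch only shows the shape of the confidence-gated pipeline.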
With respect to claim 10, Talbot further discloses wherein the set of feature metrics comprises one or more tag characteristics deemed significant for tag type determination, and/or an associated statistical threshold for each have been established as indicating significance, are used to generate a set of feature metrics for each tag type, (see paragraph 0234, wherein …Metrics relating to the contents of a whole menu (and/or multiple menus) may also be compiled and made available for review. For example, the server 102 may determine how many drinks in a given menu (or group of menus) include spirits that are supplied…), as claimed.

With respect to claim 11, Talbot further discloses wherein the set of feature metrics comprise one or more of a size, a shape, a color, a saturation, or intensity, (see paragraph 0084, wherein …the image analysis engine 110 may use the product lookup module 120 to associate relevant product information (e.g., a beverage brand, type, size, etc.) with a segment…), as claimed.

With respect to claim 12, Talbot further discloses wherein the color, the saturation, or the intensity comprise any of an absolute value, a standard deviation, or a relative value, (see paragraph 0210, wherein …compliance metric may be based on a mathematical model that produces a numerical representation of a deviation between a given display and the target planogram…), as claimed.

With respect to claim 13, Talbot further discloses wherein the color is a result of a tag’s inherent chemical or physical material properties or is a result of one or more coatings on a tag surface, (see paragraph 0254, wherein …The force sensors 1605 may include any suitable components for detecting force-based inputs and generating signals or data that are able to be accessed using processor instructions, including electrodes (e.g., electrode layers), physical components (e.g., substrates, spacing layers, structural supports, compressible elements “physical material properties”, etc.) processors, circuitry, firmware…), as claimed.

With respect to claim 14, Talbot further discloses wherein the set of feature metrics are automatically determined, (see paragraph 0065, wherein …the automated image analysis operation may include multiple steps or operations to determine which areas of the image depict products and to determine what the products are…), as claimed.

With respect to claim 15, Talbot further discloses wherein the set of feature metrics are manually determined by a human user, (see paragraph 0146, wherein …a user selects one of the product information selection buttons 270…), as claimed.

With respect to claim 17, Talbot further discloses wherein a new tag type is added to the library of tag types after training the system to differentiate the new tag type from known tag types in the library of tag types, (see paragraph 0074, wherein …Compliance targets may include, for example, data about how many products they want displayed at particular sales locations 106, what types of products they want displayed, where they want products displayed, or the like. The supplier server 108 may also receive compliance metrics, analytic results or other similar types of results or performance indicia from the remote server(s) 102 and/or the mobile devices 101), as claimed.
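Claims 11 and 12 recite feature metrics such as size, shape, color, saturation, and intensity, expressed as absolute values, standard deviations, or relative values. As an illustration only (the HSV layout, value ranges, and metric choices below are assumptions of this note, not taken from the application), such metrics might be computed along these lines:

```python
import numpy as np

def patch_feature_metrics(hsv_patch):
    """Toy feature metrics for a candidate tag patch, assumed to be an
    HSV image array of shape (height, width, 3) with values in [0, 1]."""
    h, w, _ = hsv_patch.shape
    sat = hsv_patch[..., 1]
    val = hsv_patch[..., 2]
    return {
        "size": h * w,                         # absolute size in pixels
        "shape": w / h,                        # aspect ratio as a shape proxy
        "saturation_mean": float(sat.mean()),  # absolute value
        "saturation_std": float(sat.std()),    # standard deviation
        "intensity_rel": float(val.mean() / max(val.max(), 1e-6)),  # relative value
    }

# Example: a 4x6 patch of near-uniform mid-gray with slight noise.
rng = np.random.default_rng(0)
patch = np.clip(rng.normal(0.5, 0.02, size=(4, 6, 3)), 0, 1)
print(patch_feature_metrics(patch))
```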
With respect to claim 18, Talbot further discloses wherein the tag type corresponds to a pre-defined set of feature metric values, (see paragraph 0099, wherein …Image 138 represents an example image that includes a subset of segments whose confidence metrics satisfy the confidence condition (shown in dotted boxes), as well as a subset of segments whose confidence metrics fail to satisfy the confidence condition (e.g., segments 139, 141 shown in solid boxes)…), as claimed.

With respect to claim 19, Talbot further discloses wherein the pre-defined set of feature metric values corresponding to a known tag type is modified, if necessary, as the library of tag types grows, (see paragraph 0091, wherein …Also, because product data is stored and accessed centrally (e.g., by the remote server), the system is highly scalable, as updates to product databases, UPC codes, and the like, can be applied to the central system, rather than being sent to and stored on the multitudes of mobile devices that may be used in the instant system), as claimed.

With respect to claim 20, Talbot further discloses wherein the one or more images are acquired using a mobile device, (see paragraph 0009, wherein …The method may further include, at the mobile device…), as claimed.

With respect to claim 21, Talbot further discloses wherein the mobile device comprises a smartphone, a microscope, or a tablet, (see paragraph 0068, wherein …The process of obtaining images of products, associating the images with a particular location (e.g., a retail store), sending the images for analysis, receiving an annotated image, receiving compliance scores (or other data) and action items, and performing real-time updates and corrections to the annotated image may all be facilitated by an application that may be executed on a portable computing device, such as a mobile phone, tablet computer, laptop computer, personal digital assistant, or the like…), as claimed.

With respect to claim 24, Talbot further discloses wherein the image sensor comprises a solid-state sensor, a CMOS sensor, a CCD sensor, a staring array, an RGB sensor, an IR sensor, an RGB and IR sensor, a Bayer pattern color sensor, a multiple band sensor, or a monochrome sensor, (see paragraph 0221, wherein …. FIG. 12 illustrates an example interface for capturing an image of a menu with a mobile device 1200 having an integrated camera. The device 1200 (which may be a mobile phone “a CMOS sensor”, tablet computer, digital camera, or the like) may present alignment guides 1204 on a display 1202…), as claimed.

With respect to claim 25, Talbot further discloses wherein determining the tag type uses one or more machine learning algorithms comprising: a support vector machine, neural network model, a bounding box model, a clustering algorithm, and/or a classifier algorithm, (see paragraph 0218, wherein …the machine learning model(s) of the compliance metric engine may be based on artificial neural networks, support vector machines, Bayesian networks, genetic algorithms, or the like, and may be implemented using any suitable software, including but not limited to Google Prediction API, NeuroSolutions, TensorFlow, Apache Mahout, PyTorch, or Deeplearning4j), as claimed.

Claims 26 and 27 are rejected for the same reasons as set forth in the rejections for claim 1, because claims 26 and 27 are claiming subject matter of similar scope as claimed in claim 1.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 16, 22 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Talbot et al. (US Pub. 2020/0219043) in view of Xu et al. (US Pub. 2019/0266418).

With respect to claim 16, Talbot discloses all the limitations as claimed and rejected in claim 1 above. However, Talbot fails to explicitly disclose wherein one or more ground truth images of one or more known tag types are used to train the system, as claimed. Xu teaches one or more ground truth images of one or more known tag types are used to train the system, (see paragraph 0041, wherein …DNN may be trained with labeled images …The loss function(s) may be used to measure error in the predictions of the DNN using one or more ground truth masks…), as claimed. It would have been obvious to one of ordinary skill in the art at the effective date of invention to combine the references as they are analogous because they are solving a similar problem of tagging using an image analysis. The teaching of Xu training a machine learning model using ground truth images can be incorporated into Talbot’s system as suggested (see paragraph 0222, wherein …Machine learning models may be trained using images…), for suggestion, and modifying the system yields trained machine learning model for identifying the objects, for motivation.

With respect to claim 22, for the same reasons, the combination of Talbot and Xu further discloses wherein an image segmentation comprises a delineation of pixels belonging to the tag types in the one or more images, (see Xu paragraph 0058, wherein …the segmentation mask(s) 110 may include points (e.g., pixels) in the image …In some examples, the segmentation mask(s) 110 generated may include one or more binary masks (e.g., binary mask head 334 of FIG. 3C) with a first representation for background elements …and a second representation for foreground elements…), as claimed.

With respect to claim 23, the combination of Talbot and Xu further discloses wherein the delineation of the pixels comprises determining a foreground and a background, and wherein the foreground and the background are used to generate a binary segmentation mask, (see Xu paragraph 0058, wherein …The binary mask may be output by the machine learning model(s) 108 as pixel values of 0 or 1 (for black or white), may include other pixel values, or may include a range of values that are interpreted as 0 or 1 (e.g., 0 to 0.49 is interpreted as 0, and 0.5 to 1 is interpreted as 1)…), as claimed.
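The passage of Xu cited for claims 22 and 23 describes snapping per-pixel model outputs to a binary foreground/background mask, with values from 0 to 0.49 interpreted as 0 and 0.5 to 1 interpreted as 1. A minimal sketch of that thresholding step (the 0.5 cutoff follows the quoted passage; the array shape and function name are illustrative):

```python
import numpy as np

def binarize_mask(scores, cutoff=0.5):
    """Snap per-pixel foreground scores in [0, 1] to a binary mask:
    values below the cutoff become background (0), the rest foreground (1)."""
    return (np.asarray(scores) >= cutoff).astype(np.uint8)

scores = np.array([[0.10, 0.49, 0.50],
                   [0.75, 0.30, 0.95]])
print(binarize_mask(scores))
# [[0 0 1]
#  [1 0 1]]
```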
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to VIKKRAM BALI, whose telephone number is (571) 272-7415. The examiner can normally be reached Monday-Friday, 7:00 AM-3:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse, can be reached at 571-272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/VIKKRAM BALI/
Primary Examiner, Art Unit 2663

Prosecution Timeline

Apr 02, 2024
Application Filed
Mar 18, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications with similar technology granted by this same examiner

Patent 12602810
TIRE-SIZE IDENTIFICATION METHOD, TIRE-SIZE IDENTIFICATION SYSTEM AND COMPUTER-READABLE STORAGE MEDIUM
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12586208
APPARATUS AND METHOD FOR OPERATING A DENTAL APPLIANCE
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12567248
A CROP SCANNING SYSTEM, PARTS THEREOF, AND ASSOCIATED METHODS
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12561937
METHOD, COMPUTER PROGRAM, PROFILE IDENTIFICATION DEVICE
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12537917
ADAPTATION OF THE RADIO CONNECTION BETWEEN A MOBILE DEVICE AND A BASE STATION
Granted Jan 27, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
82%
Grant Probability
93%
With Interview (+11.3%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 626 resolved cases by this examiner. Grant probability derived from career allow rate.
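As the note above indicates, the headline figure tracks the career allow rate (510 of 626 resolved cases), and the with-interview figure appears to add the interview lift on top of it. A quick check of that arithmetic, assuming the lift is simply additive and that the report rounds to whole percentages:

```python
granted, resolved = 510, 626
allow_rate = granted / resolved               # ~0.815, shown in the report as 82%
interview_lift = 0.113                        # +11.3 percentage points
with_interview = allow_rate + interview_lift  # ~0.928, shown in the report as 93%
print(f"base {allow_rate:.1%}, with interview {with_interview:.1%}")
```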
