Prosecution Insights
Last updated: April 19, 2026
Application No. 18/309,822

IDENTIFYING OBJECTS IN AN IMAGE USING ULTRA SHORT RANGE WIRELESS COMMUNICATION DATA

Status: Final Rejection (§103)
Filed: Apr 30, 2023
Examiner: COLEMAN, STEPHEN P
Art Unit: 2675
Tech Center: 2600 (Communications)
Assignee: Motorola Mobility LLC
OA Round: 2 (Final)

Grant Probability: 84% (favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 5m
Grant Probability With Interview: 96%

Examiner Intelligence

Career Allow Rate: 84% (above average; 737 granted / 877 resolved; +22.0% vs Tech Center average)
Interview Lift: +11.6% (moderate) among resolved cases with interview
Avg Prosecution: 2y 5m
Currently Pending: 47 applications
Total Applications: 924 (across all art units)

Statute-Specific Performance

§101: 12.5% (-27.5% vs TC avg)
§103: 45.5% (+5.5% vs TC avg)
§102: 27.0% (-13.0% vs TC avg)
§112: 6.8% (-33.2% vs TC avg)

Tech Center averages are estimates. Based on career data from 877 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

DETAILED ACTION

RESPONSE TO ARGUMENTS

The examiner acknowledges the amendment of claims 1, 5, 10 & 19 and the cancellation of claims 2, 6, 11 & 15 filed 12/26/2025. Applicant's arguments filed on 12/26/2025 have been fully considered but are deemed moot in view of the new grounds of rejection. Because the amendments changed the claim scope, a new ground of rejection is proper.

CLAIM REJECTIONS - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 10, 12, 19 & 21-22 are rejected under 35 U.S.C. 103 as being unpatentable over Hollar et al. (U.S. Publication 2021/0304577) in view of Rothschild et al. (U.S. Publication 2018/0285387).

As to claims 1, 10 & 19, Hollar discloses a short-range wireless communication transceiver; at least one camera (108, Figs. 1A-1C; [0059] discloses one or more camera sensors 108); a display (601, Fig. 6; [0082] discloses a smart device with a smart sensor 602 viewing an area and capturing an image); a memory (1736, Fig. 17; [0144] discloses a memory) having stored thereon an image identification module (IIM) for identifying objects in an image; and at least one processor (1738, Fig. 17; [0144] discloses a processor) communicatively coupled to the short-range wireless communication transceiver, the at least one camera, the display, and the memory, the at least one processor executing program code of the IIM, which enables the electronic device to:

capture a first image within a field of view of a first camera among the at least one camera ([0074] & Fig. 4 disclose that video footage of a person 409 may be initially captured from the camera sensor of the location device 404);

receive, via the short-range wireless communication transceiver, first object data from a second electronic device located within the field of view (UWB tags 606/504/512 are in the camera's FOV; UWB unit 104 receives/transmits, measuring time of flight (TOF) and angle of arrival (AOA); the relative location of the tag 606 can be determined and overlaid on the image; Figs. 6, 10A-10C & [0064, 0084]);

determine, based on the first object data, if the first image contains a first tagged object within the field of view of the first camera ([0074-0076], [0102], Figs. 5, 10B-10C disclose that the system can associate the closest person object in the image to the UWB tag and apply object recognition and classification algorithms to identify it);

in response to determining that the first image contains the first tagged object within the field of view, map, based on the first object data, the first tagged object to a first location within the first image (Fig. 6 & [0074, 0084]);

generate first meta-data associated with the first image, the first meta-data containing at least the first location of the first tagged object within the first image ([0074, 0084] & Fig. 6 disclose generating data that includes the pixel location of the tag/object in the camera image and recording attributes; that location data is associated with the image); and

store the first image with the first meta-data to the memory (Fig. 17 & [0144-0145] disclose storing processed data in memory as part of the system's data processing flow: manipulate data and store the data in memory 1736 under control of processor 1738).

Hollar is silent as to: identify, based on the first object data, a first tagged object identifier associated with the first tagged object; modify the first meta-data associated with the first image to generate modified first meta-data comprising the first tagged object identifier; and store the first image with the modified first meta-data to the memory.

However, Rothschild discloses identifying, based on the first object data, a first tagged object identifier associated with the first tagged object (308, Fig. 3; [0039] discloses identifying the identities of individuals present in the captured image at step 310); modifying the first meta-data associated with the first image to generate modified first meta-data comprising the first tagged object identifier (314, Fig. 3; [0042] discloses that after successful image matching, "the first device 102 may tag the individuals using the identities and personal details" and then "store the tagging information in metadata of the captured image"); and storing the first image with the modified first meta-data to the memory ([0042] discloses that tagging information may be stored in metadata of the captured image and may be stored in the memory 112 of the first device; [0033] discloses that tagging information is stored in metadata of the image, present in the memory 112).
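The capture-and-tag flow that the examiner maps across Hollar and Rothschild can be sketched in a few lines. Everything below is an illustrative assumption, not the implementation of either reference: the names (`ObjectData`, `build_metadata`), the linear angle-to-pixel projection, and the dict-based meta-data are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ObjectData:
    """First object data received over the short-range (e.g. UWB) link."""
    tag_id: str        # tagged-object identifier (the piece Rothschild supplies)
    angle_deg: float   # angle of arrival (AOA) relative to the camera boresight
    distance_m: float  # range derived from time of flight (TOF); unused here

def map_to_pixel(obj: ObjectData, img_w: int, fov_deg: float):
    """Project an AOA reading onto a horizontal pixel column.

    Returns None when the tag lies outside the camera's field of view,
    i.e. the first image does not contain the tagged object.
    """
    half_fov = fov_deg / 2.0
    if abs(obj.angle_deg) > half_fov:
        return None
    # Simple linear angle-to-column mapping (illustrative only).
    return round((obj.angle_deg + half_fov) / fov_deg * (img_w - 1))

def build_metadata(obj: ObjectData, img_w: int, fov_deg: float):
    """Generate first meta-data holding the tagged object's location, then
    'modify' it to also carry the tagged-object identifier."""
    x = map_to_pixel(obj, img_w, fov_deg)
    if x is None:
        return None
    meta = {"tagged_object_x": x}          # first meta-data: location only
    meta["tagged_object_id"] = obj.tag_id  # modified meta-data: add identifier
    return meta
```

With this toy mapping, a tag on the camera boresight (0°) of a 1920-pixel-wide, 60° FOV image lands at the center column, and a tag at 45° falls outside the FOV and produces no meta-data.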
It would have been obvious to one of ordinary skill in the art at the time of effective filing to modify Hollar's disclosure to include the above limitations in order to persist the already-paired tag/object identity inside the captured image's metadata for later retrieval, recognition, display, and downstream use without repeating the pairing step.

As to claims 3 & 12, Hollar in view of Rothschild discloses everything as disclosed in claims 1 & 10, respectively. In addition, Hollar discloses: receives a plurality of object data for a first time period from the second electronic device, the plurality of object data each including an object data time stamp specifying when each of the plurality of object data was generated; stores the plurality of object data for the first time period to the memory; identifies a first time stamp associated with the first image; determines if one stored object data time stamp substantially matches the first time stamp; and, in response to determining that one stored object data time stamp substantially matches the first time stamp, retrieves as the first object data a specific one of the plurality of object data associated with the one stored object data time stamp (see Figs. 8-9 and corresponding disclosure).

As to claim 21, Hollar in view of Rothschild discloses everything as disclosed in claim 1. In addition, Hollar discloses wherein the short-range wireless communication transceiver is an ultra-wideband wireless communication transceiver (UWB tags 606/504/512 are in the camera's FOV; UWB unit 104 receives/transmits, measuring time of flight (TOF) and angle of arrival (AOA); the relative location of the tag 606 can be determined and overlaid on the image; Figs. 6, 10A-10C & [0064, 0084]).

As to claim 22, Hollar in view of Rothschild discloses everything as disclosed in claim 1. In addition, Hollar discloses wherein the short-range wireless communication transceiver is configured to provide both location determination and position tracking (Figs. 6, 10A-10C & [0064, 0084], as cited for claim 21).

Claims 4, 13 & 20 are rejected under 35 U.S.C. 103 as being unpatentable over Hollar et al. (U.S. Publication 2021/0304577) in view of Rothschild et al. (U.S. Publication 2018/0285387), as applied to claims 1, 10 & 19 respectively above, and further in view of Rouady et al. (U.S. Publication 2016/0366464).

As to claims 4, 13 & 20, Hollar in view of Rothschild discloses everything as disclosed in claims 1, 10 & 19, respectively, but is silent as to wherein the at least one processor: determines if the first image has been selected for viewing; in response to determining that the first image has been selected for viewing, retrieves the first image and the first meta-data corresponding to the first image; determines if the first meta-data contains a first tagged object identifier and a first location of the first tagged object within the first image; and, in response to determining that the first meta-data contains the first tagged object identifier and the first location of the first tagged object within the first image, marks the first image with the first tagged object identifier at the first location. However, Rouady's Fig. 8 & [0071-0073] disclose each of these limitations. It would have been obvious to one of ordinary skill in the art at the time of effective filing to modify Hollar in view of Rothschild's disclosure to include the above limitations in order to reduce UI errors.

Claims 5, 7-8, 14 & 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Hollar et al. (U.S. Publication 2021/0304577) in view of Rothschild et al. (U.S. Publication 2018/0285387), as applied to claims 1, 10 & 19 respectively above, and further in view of Newman et al. (U.S. Publication 2019/0362464).

As to claims 5 & 14, Hollar in view of Rothschild discloses everything as disclosed in claims 1 & 10, respectively, but is silent as to: detects selection of a first post-capture processing parameter of the first image for modification; generates a second post-capture processing parameter of the first image at least partially based on the first meta-data; and renders the first image based on the generated second post-capture processing parameter; wherein the second post-capture processing parameter comprises at least one of: a zoom level; a focal distance; a cropping area; an image tag; an image effect; and removal of an unwanted object. However, Newman's Figs. 4-6 and corresponding disclosures disclose each of these limitations. It would have been obvious to one of ordinary skill in the art at the time of effective filing to modify Hollar in view of Rothschild's disclosure to include the above limitations in order to reduce manual retouching by deriving zoom/crop/effect parameters automatically.

As to claims 7 & 16, Hollar in view of Rothschild discloses everything as disclosed in claims 1 & 10, respectively, but is silent as to: retrieves at least one of a first zoom level and a first focal distance of the first image from the first meta-data; generates at least one of a second zoom level and a second focal distance of the first image based on the first location of the first tagged object; and adjusts at least one of the first zoom level and the first focal distance of the first image to the second zoom level and the second focal distance to focus on the first tagged object at the first location. However, Newman's Figs. 4-6 and corresponding disclosures disclose each of these limitations. It would have been obvious to one of ordinary skill in the art at the time of effective filing to modify Hollar in view of Rothschild's disclosure to include the above limitations in order to reduce manual refocusing and to yield consistent subject reframing.
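The zoom/reframe limitation at issue in claims 7 & 16 amounts to deriving a second cropping area from the tagged object's stored location. A minimal sketch, assuming a hypothetical meta-data layout (`tagged_object_x`/`tagged_object_y` keys) and a clamp-to-frame policy; neither is specified by the references:

```python
def reframe_on_tag(meta: dict, img_w: int, img_h: int, zoom: float) -> tuple:
    """Derive a second zoom level / cropping area centred on the tagged
    object's location stored in the image meta-data (illustrative sketch).

    Returns (left, top, crop_w, crop_h) in pixel coordinates.
    """
    x = meta["tagged_object_x"]
    y = meta.get("tagged_object_y", img_h // 2)  # default: vertical centre
    crop_w, crop_h = int(img_w / zoom), int(img_h / zoom)
    # Clamp so the crop window stays entirely inside the frame.
    left = min(max(x - crop_w // 2, 0), img_w - crop_w)
    top = min(max(y - crop_h // 2, 0), img_h - crop_h)
    return (left, top, crop_w, crop_h)
```

For a 1920x1080 frame at 2x zoom, a tag at the frame centre yields a 960x540 crop at (480, 270); a tag at the left edge gets the same crop clamped to the frame boundary.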
As to claims 8 & 17, Hollar in view of Rothschild discloses everything as disclosed in claims 1 & 10, respectively, but is silent as to: detects selection of an image effect for the first image; generates at least one post-capture processing parameter for the selected image effect, the generated at least one post-capture processing parameter at least partially based on the selected image effect applied to the first tagged object identified by the first meta-data; renders the first image at least partially based on the generated at least one post-capture processing parameter; and displays the rendered first image on the display. However, Newman's Figs. 4-6 and corresponding disclosures disclose each of these limitations. It would have been obvious to one of ordinary skill in the art at the time of effective filing to modify Hollar in view of Rothschild's disclosure to include the above limitations in order to improve quality and efficiency and to reduce user steps.

Claims 9 & 18 are rejected under 35 U.S.C. 103 as being unpatentable over Hollar et al. (U.S. Publication 2021/0304577) in view of Rothschild et al. (U.S. Publication 2018/0285387), as applied to claims 1 & 10 respectively above, and further in view of NA et al. (U.S. Publication 2015/0022698).

As to claims 9 & 18, Hollar in view of Rothschild discloses everything as disclosed in claims 1 & 10, respectively, but is silent as to wherein the at least one processor: displays the first image with a first tagged object identifier at the first location; detects selection of the first tagged object for deletion in the first image; removes the first tagged object from the first image; renders the first image with the first tagged object removed; and displays the rendered first image on the display. However, NA's Abstract and Figs. 4-5 disclose each of these limitations. It would have been obvious to one of ordinary skill in the art at the time of effective filing to modify Hollar in view of Rothschild's disclosure to include the above limitations in order to reduce selection error and processing cost.

CONCLUSION

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension-of-time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Stephen P. Coleman, whose telephone number is (571) 270-5931. The examiner can normally be reached Monday-Thursday, 8 AM-5 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Moyer, can be reached at (571) 272-9523. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/STEPHEN P COLEMAN/
Primary Examiner, Art Unit 2675

Prosecution Timeline

Apr 30, 2023: Application Filed
Sep 10, 2025: Request for Continued Examination
Sep 12, 2025: Response after Non-Final Action
Sep 24, 2025: Non-Final Rejection (§103)
Dec 26, 2025: Response Filed
Jan 14, 2026: Applicant Interview (Telephonic)
Jan 15, 2026: Examiner Interview Summary
Mar 10, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology:

Patent 12601591 - DISTANCE MEASURING DEVICE, DISTANCE MEASURING METHOD, PROGRAM, ELECTRONIC APPARATUS, LEARNING MODEL GENERATING METHOD, MANUFACTURING METHOD, AND DEPTH MAP GENERATING METHOD (2y 5m to grant; granted Apr 14, 2026)
Patent 12602429 - Video and Audio Multimodal Searching System (2y 5m to grant; granted Apr 14, 2026)
Patent 12597146 - INFORMATION PROCESSING APPARATUS AND CONTROL METHOD THEREOF (2y 5m to grant; granted Apr 07, 2026)
Patent 12591961 - MONITORING DEVICE AND MONITORING SYSTEM (2y 5m to grant; granted Mar 31, 2026)
Patent 12586237 - DEVICE, COMPUTER PROGRAM AND METHOD (2y 5m to grant; granted Mar 24, 2026)

Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 84%
With Interview: 96% (+11.6%)
Median Time to Grant: 2y 5m
PTA Risk: Moderate

Based on 877 resolved cases by this examiner. Grant probability derived from career allow rate.
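The headline figures are simple derived statistics; a quick check of the arithmetic, assuming (as the rounded dashboard numbers suggest) that the interview lift is additive in percentage points on top of the career allow rate:

```python
granted, resolved = 737, 877
allow_rate = granted / resolved      # career allow rate from resolved cases
interview_lift = 0.116               # +11.6 percentage points with interview

print(round(allow_rate * 100))                      # -> 84
print(round((allow_rate + interview_lift) * 100))   # -> 96
```

737 / 877 is 84.0%, and adding the 11.6-point interview lift gives 95.6%, which the dashboard rounds to 96%.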
