Prosecution Insights
Last updated: April 19, 2026
Application No. 18/634,791

SYSTEM AND METHOD OF GENERATING AUTHENTICATION ASSET USING RADIO FREQUENCY IDENTIFICATION AND EVENT DATA

Status: Final Rejection (§103)
Filed: Apr 12, 2024
Examiner: PENA-SANTANA, TANIA M
Art Unit: 2443
Tech Center: 2400 — Computer Networks
Assignee: Genuine Inc.
OA Round: 2 (Final)
Grant Probability: 72% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 10m
With Interview: 66%

Examiner Intelligence

Career Allow Rate: 72% (above average) — 176 granted of 245 resolved, +13.8% vs TC avg
Interview Lift: -6.0% (minimal), based on resolved cases with interview
Typical Timeline: 2y 10m avg prosecution; 29 applications currently pending
Career History: 274 total applications across all art units
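The tiles above can be reproduced with simple arithmetic. This is a minimal sketch, assuming the counts shown on this page (176 granted of 245 resolved, a -6.0% interview lift); the helper names are illustrative, not from any real API.

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def with_interview(base_rate: float, lift: float) -> float:
    """Apply the observed interview lift (negative here) to a base rate."""
    return base_rate + lift

career = allow_rate(176, 245)                   # ~71.8%, displayed rounded to 72%
adjusted = with_interview(round(career), -6.0)  # 72% - 6.0% = 66% with interview

print(f"Career allow rate: {career:.1f}%")
print(f"With interview:    {adjusted:.1f}%")
```

Note that the displayed 72% is the rounded career rate; the interview-adjusted figure is computed from the rounded value, which is why 66% appears elsewhere on the page.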

Statute-Specific Performance

§101: 10.4% (-29.6% vs TC avg)
§103: 54.8% (+14.8% vs TC avg)
§102: 17.6% (-22.4% vs TC avg)
§112: 10.0% (-30.0% vs TC avg)
Tech Center averages are estimates • Based on career data from 245 resolved cases
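Since each statute row pairs the examiner's rate with a delta against the Tech Center average, the implied TC averages can be back-solved (examiner rate minus delta). A small sketch, using only the figures shown above:

```python
# Examiner's statute-specific allowance rates and deltas, as shown on the page.
examiner_rate = {"101": 10.4, "103": 54.8, "102": 17.6, "112": 10.0}
vs_tc_delta   = {"101": -29.6, "103": 14.8, "102": -22.4, "112": -30.0}

# Back-solve the implied Tech Center average for each statute.
tc_avg = {s: round(examiner_rate[s] - vs_tc_delta[s], 1) for s in examiner_rate}

print(tc_avg)  # every statute back-solves to a 40.0% TC average estimate
```

The fact that all four rows imply the same 40.0% baseline suggests the page compares against a single TC-wide average rather than per-statute averages.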

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claims Status

Claims 1, 3, 5, 6, 7, 10, 12-16, 18, and 19 filed 02/03/2026 have been amended. Claims 1-20 are pending and have been rejected.

Response to Arguments

Applicant’s arguments with respect to independent claims 1, 10 and 16 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-5, 7-13, 15-18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Manchi et al. (U.S. Patent No. 12,056,561), hereinafter ‘Manchi’, in view of Thomson (U.S. Publication 2019/0361847), hereinafter ‘Thomson’.
As to claim 1, Manchi discloses a method comprising: capturing one or more images of an object at an event (Manchi, see col. 3 lines 34-41, the images or video captured by the camera detecting movement or motion of an object. See col. 6 lines 6-27, event detection object); receiving radio frequency identification (RFID) data generated by a RFID reader at the event, wherein the RFID data is associated with a tag on the object at the event (Manchi, see col. 10 lines 5-10, the RFID detection event can include RFID sensors capturing RFID data from RFID tags of objects or equipment within detection range of the RFID sensors. See col. 15 lines 49-64, the RFID sensors (RFID readers or sensors) can capture RFID data from RFID tags within detection range of the RFID sensors); and associating the RFID data with one or more of time data or location data representing the event (Manchi, see col. 4 lines 1-12, the service provider computers can utilize the time information and/or location information from the cameras and RFID sensors to determine that an object has entered an area and to associate the data point events together).

Manchi is silent to generating an authentication asset having combined data including the RFID data and the one or more of time data or location data, wherein the authentication asset is configured such that, when displayed, the combined data is visually superimposed over the one or more images. However, Thomson discloses generating an authentication asset having combined data including the RFID data and the one or more of time data or location data, wherein the authentication asset is configured such that, when displayed, the combined data is visually superimposed over the one or more images (Thomson, see [0045-0046], data can then be accessed by system in order to provide links to real-time data and events in accordance with the transparent layer and link generating process. See [0056], superimposing over one of the plurality of objects relating to the one or more documents that the icon links to. See [0068], if one of the sensor icons is selected by the user, the display presents data measured or obtained by the sensor which the user-selected sensor icon links to).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Manchi in view of Thomson in order to further modify the method for an object detection feature from the teachings of Manchi with the method for spatial linking navigation from the teachings of Thomson. One of ordinary skill in the art would have been motivated because it would allow to render entire content while providing both the overlay generator and link generator (Thomson – 0056).

As to claim 2, Manchi in view of Thomson discloses everything disclosed in claim 1. Manchi further discloses wherein associating the one or more of time data or location data includes generating the time data and the location data at the event, and wherein the authentication asset includes the time data and the location data (Manchi, see col. 5 lines 12-32, RFID sensor and camera are located near the door for capturing data of objects entering or exiting door, wherein the camera can capture live stream camera footage and obtain a timestamp (time) and location data).

As to claim 3, Manchi in view of Thomson discloses everything disclosed in claim 1. Manchi further discloses capturing the one or more images of the object being used at the event (Manchi, see col. 5 lines 12-32, the camera can capture the live stream camera footage (video or images), time, and location data in order to transmit to the service provider).

As to claim 4, Manchi in view of Thomson discloses everything disclosed in claim 3.
Manchi further discloses wherein capturing the one or more images includes capturing video including the one or more images of a participant using the object at the event, and further comprising associating the RFID data with participant data representing the participant (Manchi, see col. 7 lines 19-67, the cameras are configured to provide video or images of the user entering the entryway and/or the area as well as information about the video or images, wherein the information about the video or images captured by cameras includes a timestamp and location of the footage).

As to claim 5, Manchi in view of Thomson discloses everything disclosed in claim 4. Manchi further discloses displaying the authentication asset, wherein the displayed authentication asset includes the RFID data, one or more of the time data or the location data, and the participant data superimposed on the one or more images (Manchi, see col. 7 lines 19-67, footage captured by the cameras includes a user entering the area, RFID data, timestamp and location).

As to claim 7, Manchi in view of Thomson discloses everything disclosed in claim 1. Manchi further discloses determining whether event equipment at the event is transmitting RF signals (Manchi, see col. 2 lines 1-12, service provider computers implementing the object detection feature can correlate radio frequency identification (RFID) data received from an RFID tag associated with the object); and capturing the RFID data in response to determining that the event equipment is not transmitting RF signals (Manchi, see col. 2 lines 1-67, cameras can capture RFID data, wherein the camera is used to detect the presence of an object, the type of the object, and that the object is exiting the area (exit event)).

As to claim 8, Manchi in view of Thomson discloses everything disclosed in claim 1. Manchi further discloses wherein the RFID reader is fixed to a venue of the event (Manchi, see col. 13 lines 8-16, RFID sensor located within the volume of space of the entryway of the area).

As to claim 9, Manchi in view of Thomson discloses everything disclosed in claim 8. Manchi further discloses wherein the RFID reader is fixed to the venue at a location passed by the object during the event (Manchi, see col. 13 lines 8-16, RFID sensor located within the volume of space of the entryway of the area, wherein the RFID sensor can be configured to capture the RFID data from an RFID tag associated with the object).

As to claim 10, Manchi discloses a non-transitory computer-readable medium storing instructions which, when executed by one or more processors of a radio frequency identification (RFID) authentication system, cause the RFID authentication system to perform a method comprising: capturing one or more images of an object at an event (Manchi, see col. 3 lines 34-41, the images or video captured by the camera detecting movement or motion of an object. See col. 6 lines 6-27, event detection object); capturing, by a radio frequency identification (RFID) reader, RFID data associated with a tag on the object at the event (Manchi, see col. 10 lines 5-10, the RFID detection event can include RFID sensors capturing RFID data from RFID tags of objects or equipment within detection range of the RFID sensors. See col. 15 lines 49-64, the RFID sensors (RFID readers or sensors) can capture RFID data from RFID tags within detection range of the RFID sensors); and associating the RFID data with one or more of time data or location data representing the event (Manchi, see col. 4 lines 1-12, the service provider computers can utilize the time information and/or location information from the cameras and RFID sensors to determine that an object has entered an area and to associate the data point events together).

Manchi is silent to generating an authentication asset having combined data including the RFID data and the one or more of time data or location data, wherein the authentication asset is configured such that, when displayed, the combined data is visually superimposed over the one or more images. However, Thomson discloses generating an authentication asset having combined data including the RFID data and the one or more of time data or location data, wherein the authentication asset is configured such that, when displayed, the combined data is visually superimposed over the one or more images (Thomson, see [0045-0046], data can then be accessed by system in order to provide links to real-time data and events in accordance with the transparent layer and link generating process. See [0056], superimposing over one of the plurality of objects relating to the one or more documents that the icon links to. See [0068], if one of the sensor icons is selected by the user, the display presents data measured or obtained by the sensor which the user-selected sensor icon links to).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Manchi in view of Thomson in order to further modify the method for an object detection feature from the teachings of Manchi with the method for spatial linking navigation from the teachings of Thomson. One of ordinary skill in the art would have been motivated because it would allow to render entire content while providing both the overlay generator and link generator (Thomson – 0056).

As to claim 11, Manchi in view of Thomson discloses everything disclosed in claim 10.
Manchi further discloses wherein associating the one or more of time data or location data includes generating the time data and the location data at the event, and wherein the authentication asset includes the time data and the location data (Manchi, see col. 5 lines 12-32, RFID sensor and camera are located near the door for capturing data of objects entering or exiting door, wherein the camera can capture live stream camera footage and obtain a timestamp (time) and location data).

As to claim 12, Manchi in view of Thomson discloses everything disclosed in claim 10. Manchi further discloses causing the RFID authentication system to perform the method comprising capturing one or more images of the object being used by a participant at the event, wherein the authentication asset includes the one or more images (Manchi, see col. 7 lines 19-67, the cameras are configured to provide video or images of the user entering the entryway and/or the area as well as information about the video or images, wherein the information about the video or images captured by cameras includes a timestamp and location of the footage).

As to claim 13, Manchi in view of Thomson discloses everything disclosed in claim 12. Manchi further discloses causing the RFID authentication system to perform the method comprising displaying the authentication asset, wherein the displayed authentication asset includes the RFID data, one or more of the time data or the location data, and participant data representing the participant superimposed on the one or more images (Manchi, see col. 7 lines 19-67, footage captured by the cameras includes a user entering the area, RFID data, timestamp and location).

As to claim 15, Manchi in view of Thomson discloses everything disclosed in claim 10. Manchi further discloses causing the RFID authentication system to perform the method comprising: determining whether event equipment at the event is transmitting RF signals (Manchi, see col. 2 lines 1-12, service provider computers implementing the object detection feature can correlate radio frequency identification (RFID) data received from an RFID tag associated with the object); and capturing the RFID data in response to determining that the event equipment is not transmitting RF signals (Manchi, see col. 2 lines 1-67, cameras can capture RFID data, wherein the camera is used to detect the presence of an object, the type of the object, and that the object is exiting the area (exit event)).

As to claim 16, Manchi discloses a radio frequency identification (RFID) authentication system comprising: a camera to capture one or more images of an object at an event (Manchi, see col. 3 lines 34-41, the images or video captured by the camera detecting movement or motion of an object. See col. 6 lines 6-27, event detection object); a RFID reader to capture RFID data associated with a tag on an object at an event (Manchi, see col. 10 lines 5-10, the RFID detection event can include RFID sensors capturing RFID data from RFID tags of objects or equipment within detection range of the RFID sensors. See col. 15 lines 49-64, the RFID sensors (RFID readers or sensors) can capture RFID data from RFID tags within detection range of the RFID sensors); and one or more processors configured to associate the RFID data with one or more of time data or location data representing the event (Manchi, see fig. 7, processors. See col. 4 lines 1-12, the service provider computers can utilize the time information and/or location information from the cameras and RFID sensors to determine that an object has entered an area and to associate the data point events together).

Manchi is silent to generating an authentication asset having combined data including the RFID data and the one or more of time data or location data, wherein the authentication asset is configured such that, when displayed, the combined data is visually superimposed over the one or more images.
However, Thomson discloses generating an authentication asset having combined data including the RFID data and the one or more of time data or location data, wherein the authentication asset is configured such that, when displayed, the combined data is visually superimposed over the one or more images (Thomson, see [0045-0046], data can then be accessed by system in order to provide links to real-time data and events in accordance with the transparent layer and link generating process. See [0056], superimposing over one of the plurality of objects relating to the one or more documents that the icon links to. See [0068], if one of the sensor icons is selected by the user, the display presents data measured or obtained by the sensor which the user-selected sensor icon links to).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Manchi in view of Thomson in order to further modify the method for an object detection feature from the teachings of Manchi with the method for spatial linking navigation from the teachings of Thomson. One of ordinary skill in the art would have been motivated because it would allow to render entire content while providing both the overlay generator and link generator (Thomson – 0056).

As to claim 17, Manchi in view of Thomson discloses everything disclosed in claim 16. Manchi further discloses wherein associating the one or more of time data or location data includes generating time data and the location data, and wherein the authentication asset includes the time data and the location data (Manchi, see col. 5 lines 12-32, RFID sensor and camera are located near the door for capturing data of objects entering or exiting door, wherein the camera can capture live stream camera footage and obtain a timestamp (time) and location data).

As to claim 18, Manchi in view of Thomson discloses everything disclosed in claim 16.
Manchi further discloses wherein the camera captures the one or more images of the object being used at the event (Manchi, see col. 7 lines 19-67, the cameras are configured to provide video or images of the user entering the entryway and/or the area as well as information about the video or images, wherein the information about the video or images captured by cameras includes a timestamp and location of the footage).

As to claim 20, Manchi in view of Thomson discloses everything disclosed in claim 16. Manchi further discloses wherein the one or more processors determine whether event equipment at the event is transmitting RF signals (Manchi, see col. 2 lines 1-12, service provider computers implementing the object detection feature can correlate radio frequency identification (RFID) data received from an RFID tag associated with the object) and cause the RFID reader to capture the RFID data in response to determining that the event equipment is not transmitting RF signals (Manchi, see col. 2 lines 1-67, cameras can capture RFID data, wherein the camera is used to detect the presence of an object, the type of the object, and that the object is exiting the area (exit event)).

Claims 6, 14 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Manchi et al. (U.S. Patent No. 12,056,561), hereinafter ‘Manchi’, in view of Thomson (U.S. Publication 2019/0361847), hereinafter ‘Thomson’, and Faris et al. (U.S. Publication 2019/0081947), hereinafter ‘Faris’.

As to claim 6, Manchi in view of Thomson discloses everything disclosed in claim 5, comprising filtering one or more of the RFID data or the participant data based on one or more of an object parameter or a participant parameter, wherein the displayed authentication asset includes only the RFID data or the participant data associated with the object parameter or the participant parameter.
However, Faris discloses comprising filtering one or more of the RFID data or the participant data based on one or more of an object parameter or a participant parameter, wherein the displayed authentication asset includes only the RFID data or the participant data associated with the object parameter or the participant parameter (Faris, see [0100], a user can utilize the end-user computing device to scan a tag in a physical asset, such as an athletic jersey. See [0143], data can be obtained from image-capturing device, such as sensor, which is used to detect athletic parameters).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Manchi in view of Thomson and Faris in order to further modify the method for an object detection feature from the teachings of Manchi with the method for spatial linking navigation from the teachings of Thomson and the method for utilizing an imaging device to capture input from an electronic tag of a physical asset from the teachings of Faris. One of ordinary skill in the art would have been motivated because it would allow to scan the electronic tag in order to obtain the unique identification number (Faris – 0100).

As to claim 14, Manchi in view of Thomson discloses everything disclosed in claim 13, causing the RFID authentication system to perform the method comprising filtering one or more of the RFID data or the participant data based on one or more of an object parameter or a participant parameter, wherein the displayed authentication asset includes only the RFID data or the participant data associated with the object parameter or the participant parameter.
However, Faris discloses causing the RFID authentication system to perform the method comprising filtering one or more of the RFID data or the participant data based on one or more of an object parameter or a participant parameter, wherein the displayed authentication asset includes only the RFID data or the participant data associated with the object parameter or the participant parameter (Faris, see [0100], a user can utilize the end-user computing device to scan a tag in a physical asset, such as an athletic jersey. See [0143], data can be obtained from image-capturing device, such as sensor, which is used to detect athletic parameters).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Manchi in view of Thomson and Faris in order to further modify the method for an object detection feature from the teachings of Manchi with the method for spatial linking navigation from the teachings of Thomson and the method for utilizing an imaging device to capture input from an electronic tag of a physical asset from the teachings of Faris. One of ordinary skill in the art would have been motivated because it would allow to scan the electronic tag in order to obtain the unique identification number (Faris – 0100).

As to claim 19, Manchi in view of Thomson discloses everything disclosed in claim 16, comprising filtering one or more of the RFID data or participant data describing a participant based on one or more of an object parameter or a participant parameter, wherein the participant used the object at the event, and wherein the displayed authentication asset includes only the RFID data or the participant data associated with the object parameter or the participant parameter.
However, Faris discloses comprising filtering one or more of the RFID data or participant data describing a participant based on one or more of an object parameter or a participant parameter, wherein the participant used the object at the event, and wherein the displayed authentication asset includes only the RFID data or the participant data associated with the object parameter or the participant parameter (Faris, see [0100], a user can utilize the end-user computing device to scan a tag in a physical asset, such as an athletic jersey. See [0143], data can be obtained from image-capturing device, such as sensor, which is used to detect athletic parameters).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Manchi in view of Thomson and Faris in order to further modify the method for an object detection feature from the teachings of Manchi with the method for spatial linking navigation from the teachings of Thomson and the method for utilizing an imaging device to capture input from an electronic tag of a physical asset from the teachings of Faris. One of ordinary skill in the art would have been motivated because it would allow to scan the electronic tag in order to obtain the unique identification number (Faris – 0100).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TANIA M PENA-SANTANA, whose telephone number is (571) 270-0627. The examiner can normally be reached Monday - Friday, 8am to 4pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nicholas R Taylor, can be reached at (571) 272-3889. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TANIA M PENA-SANTANA/
Examiner, Art Unit 2443

/NICHOLAS R TAYLOR/
Supervisory Patent Examiner, Art Unit 2443

Prosecution Timeline

Apr 12, 2024
Application Filed
Aug 04, 2025
Non-Final Rejection — §103
Feb 03, 2026
Response Filed
Mar 09, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592924: SMART HUB QUANTUM KEY DISTRIBUTION AND SECURITY MANAGEMENT IN ADVANCED NETWORKS (2y 5m to grant; granted Mar 31, 2026)
Patent 12585754: TRUSTED ROOT RECOVERY (2y 5m to grant; granted Mar 24, 2026)
Patent 12574343: SYSTEMS AND METHODS FOR MULTI-AGENT CONVERSATIONS (2y 5m to grant; granted Mar 10, 2026)
Patent 12574260: CONSENSUS PROCESSING METHOD, APPARATUS, AND SYSTEM FOR BLOCKCHAIN NETWORK, DEVICE, AND MEDIUM (2y 5m to grant; granted Mar 10, 2026)
Patent 12561477: AUTOMATED SPARSITY FEATURE SELECTION (2y 5m to grant; granted Feb 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 72%
With Interview: 66% (-6.0%)
Median Time to Grant: 2y 10m
PTA Risk: Moderate
Based on 245 resolved cases by this examiner. Grant probability derived from career allow rate.
