Prosecution Insights
Last updated: April 19, 2026
Application No. 19/045,043

DEVICE PAIRING USING MACHINE-READABLE OPTICAL LABEL

Non-Final OA (§DP)
Filed: Feb 04, 2025
Examiner: MESA, JOSE M
Art Unit: 2484
Tech Center: 2400 (Computer Networks)
Assignee: Snap Inc.
OA Round: 1 (Non-Final)
Grant Probability: 70% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 70% (401 granted / 575 resolved; +11.7% vs Tech Center average)
Interview Lift: +16.4% (allow rate with vs. without an interview, among resolved cases with an interview)
Avg Prosecution: 2y 5m (typical timeline)
Total Applications: 593 across all art units (18 currently pending)

Statute-Specific Performance

§101: 5.0% (-35.0% vs TC avg)
§102: 29.3% (-10.7% vs TC avg)
§103: 51.5% (+11.5% vs TC avg)
§112: 5.1% (-34.9% vs TC avg)
Deltas are measured against the Tech Center average estimate. Based on career data from 575 resolved cases.

Office Action

§DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/forms/. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Instant Application:

1. A method comprising: accessing, at a first device, first pose data and metadata generated by a first Visual Inertial Odometry (VIO) system of a first device; encoding, at the first device, the first pose data and metadata in a machine-readable code; displaying an image of the machine-readable code in a display of the first device; receiving relative pose data indicative of a relative pose between the first device and a second device, the relative pose data being based on the first pose data of the first device and second pose data of the second device.

2. The method of claim 1, wherein the relative pose data is received from the second device, wherein the second device is configured to: capture, using a camera of the second device, the image of the machine-readable code displayed in the display of the first device; access the second pose data from a second VIO system of the second device; decode, at the second device, the first pose data from the machine-readable code; and determine, at the second device, the relative pose data based on the first pose data and the second pose data.

3. The method of claim 1, wherein the machine-readable code includes a QR code, an outer portion of the QR code comprising encoded first pose data and metadata, an inner portion of the QR code comprising a predefined second pattern having a predefined size with respect to the outer portion, wherein the metadata indicates calibrated relationship parameters of a location of a camera of the first device and the display of the first device.

4. The method of claim 1, wherein the relative pose data is determined by: identifying a first reference coordinate frame of the first VIO system based on the first pose data; identifying a second reference coordinate frame of a second VIO system of the second device based on the second pose data; and detecting the relative pose based on an alignment of the first reference coordinate frame with the second reference coordinate frame.

5. The method of claim 1, wherein the relative pose data is determined by: identifying a first reference coordinate frame of the first VIO system based on the first pose data; identifying a second reference coordinate frame of a second VIO system of the second device based on the second pose data; and forming a world reference coordinate system based on the first reference coordinate frame and the second reference coordinate frame.

6. The method of claim 1, further comprising: displaying, in the display of the first device, a first virtual object based on the first pose data and the relative pose data.

7. The method of claim 6, wherein the second device is configured to display a second virtual object based on the second pose data of the second device and the relative pose data, wherein the second virtual object corresponds to the first virtual object.

8. The method of claim 1, wherein the first device is configured to generate a first timestamp corresponding to the first pose data of the first device, wherein the second device identifies a second timestamp in response to accessing the first pose data of the first device, and wherein the method comprises: determining that a difference between the first timestamp and the second timestamp is within a preset threshold, wherein the relative pose is determined in response to the difference between the first timestamp and the second timestamp being within the preset threshold.

9. The method of claim 1, wherein the first device comprises a mobile phone, wherein the second device comprises a head-wearable device.

10. The method of claim 1, wherein the first device is configured to operate a first augmented reality application based on the first pose data and the relative pose data, wherein the second device is configured to operate a second augmented reality application based on the second pose data and the relative pose data.

11. A first device comprising: a processor; and a memory storing instructions that, when executed by the processor, configure the first device to perform operations comprising: accessing, at a first device, first pose data and metadata generated by a first Visual Inertial Odometry (VIO) system of the first device; encoding, at the first device, the first pose data and metadata in a machine-readable code; displaying an image of the machine-readable code in a display of the first device; receiving relative pose data indicative of a relative pose between the first device and a second device, the relative pose data being based on the first pose data of the first device and second pose data of the second device.

12. The first device of claim 11, wherein the relative pose data is received from the second device, wherein the second device is configured to: capture, using a camera of the second device, the image of the machine-readable code displayed in the display of the first device; access the second pose data from a second VIO system of the second device; decode, at the second device, the first pose data from the machine-readable code; and determine, at the second device, the relative pose data based on the first pose data and the second pose data.

13. The first device of claim 11, wherein the machine-readable code includes a QR code, an outer portion of the QR code comprising encoded first pose data and metadata, an inner portion of the QR code comprising a predefined second pattern having a predefined size with respect to the outer portion, wherein the metadata indicates calibrated relationship parameters of a location of a camera of the first device and the display of the first device.

14. The first device of claim 11, wherein the relative pose data is determined by: identifying a first reference coordinate frame of the first VIO system based on the first pose data; identifying a second reference coordinate frame of a second VIO system of the second device based on the second pose data; and detecting the relative pose based on an alignment of the first reference coordinate frame with the second reference coordinate frame.

15. The first device of claim 11, wherein the relative pose data is determined by: identifying a first reference coordinate frame of the first VIO system based on the first pose data; identifying a second reference coordinate frame of a second VIO system of the second device based on the second pose data; and forming a world reference coordinate system based on the first reference coordinate frame and the second reference coordinate frame.

16. The first device of claim 11, further comprising: displaying, in the display of the first device, a first virtual object based on the first pose data and the relative pose data.

17. The first device of claim 16, wherein the second device is configured to display a second virtual object based on the second pose data of the second device and the relative pose data, wherein the second virtual object corresponds to the first virtual object.

18. The first device of claim 11, wherein the first device is configured to generate a first timestamp corresponding to the first pose data of the first device, wherein the second device identifies a second timestamp in response to accessing the first pose data of the first device, and wherein the method comprises: determining that a difference between the first timestamp and the second timestamp is within a preset threshold, wherein the relative pose is determined in response to the difference between the first timestamp and the second timestamp being within the preset threshold.

19. The method of claim 1, wherein the first device is configured to operate a first augmented reality application based on the first pose data and the relative pose data, wherein the second device is configured to operate a second augmented reality application based on the second pose data and the relative pose data.

20. A non-transitory computer-readable storage medium, the computer-readable storage medium including instructions that when executed by a computer, cause the computer to perform operations comprising: accessing, at a first device, first pose data and metadata generated by a first Visual Inertial Odometry (VIO) system of a first device; encoding, at the first device, the first pose data and metadata in a machine-readable code; displaying an image of the machine-readable code in a display of the first device; receiving relative pose data indicative of a relative pose between the first device and a second device, the relative pose data being based on the first pose data of the first device and second pose data of the second device.

Patent No. 12,243,266:

1. A method comprising: accessing first pose data from a first Visual Inertial Odometry (VIO) system of a first device; accessing, using a camera of a first device, an image of a machine-readable code that is displayed on a display of a second device, the machine-readable code being encoded by the second device with second pose data from a second VIO system of the second device; decoding, at the first device, the second pose data from the machine-readable code; and determining, at the first device, a relative pose between the first device and the second device based on the first pose data and the second pose data.

(Claim 1 above includes the claimed limitations of Claim 2 of the Instant Application)

2. The method of claim 1, wherein the second device is configured to: encode the second pose data and metadata generated by the second VIO system with the machine-readable code; and display the image of the machine-readable code in the second display of the second device.

3.
The method of claim 2, wherein the machine-readable code includes a QR code, an outer portion of the QR code comprising encoded first pose data and metadata, an inner portion of the QR code comprising a predefined second pattern having a predefined size with respect to the outer portion, wherein the metadata indicates calibrated relationship parameters of a location of a camera of the second device and the display of the second device.

4. The method of claim 1, wherein determining the relative pose further comprises: identifying a first reference coordinate frame of the first VIO system based on the first pose data; identifying a second reference coordinate frame of the second VIO system based on the second pose data; and detecting the relative pose based on an alignment of the first reference coordinate frame with the second reference coordinate frame.

5. The method of claim 1, wherein determining the relative pose further comprises: identifying a first reference coordinate frame of the first VIO system based on the first pose data; identifying a second reference coordinate frame of the second VIO system based on the second pose data; and forming a world reference coordinate system based on the first reference coordinate frame and the second reference coordinate frame.

6. The method of claim 1, further comprising: displaying, in a display of the first device, a first virtual object based on the first pose data of the first device and the relative pose.

7. The method of claim 6, further comprising: communicating the relative pose to the second device, and wherein the second device is configured to display a second virtual object based on the second pose data of the second device and the relative pose, wherein the second virtual object corresponds to the first virtual object.

8. The method of claim 1, wherein the first device is configured to identify a first timestamp corresponding to the first pose data of the first device, wherein the second device generates a second timestamp in response to accessing the second pose data of the second device, and wherein the method comprises: determining that a difference between the first timestamp and the second timestamp is within a preset threshold; and in response to the difference between the first timestamp and the second timestamp being within the preset threshold, determining the relative pose between the first device and the second device.

9. The method of claim 1, wherein the first device comprises a wearable device with a transparent display, the wearable device configured to operate a first augmented reality application.

(Claims 1 and 9 above and Claim 10 below include the claimed limitations of Claim 10 of the Instant Application)

10. The method of claim 1, wherein the second device comprises a handheld mobile device with a non-transparent touchscreen, the handheld mobile device configured to operate a second augmented reality application.

11. A computing apparatus comprising: a processor; and a memory storing instructions that, when executed by the processor, configure the apparatus to: access first pose data from a first Visual Inertial Odometry (VIO) system of a first device; access, using a camera of a first device, an image of a machine-readable code that is displayed on a display of a second device, the machine-readable code being encoded by the second device with second pose data from a second VIO system of the second device; decode, at the first device, the second pose data from the machine-readable code; and determine, at the first device, a relative pose between the first device and the second device based on the first pose data and the second pose data.

(Claim 11 above includes the claimed limitations of Claim 12 of the Instant Application)

12. The computing apparatus of claim 11, wherein the second device is configured to: encode the second pose data and metadata generated by the second VIO system with the machine-readable code; and display the image of the machine-readable code in the second display of the second device.

13. The computing apparatus of claim 12, wherein the machine-readable code includes a QR code, an outer portion of the QR code comprising encoded first pose data and metadata, an inner portion of the QR code comprising a predefined second pattern having a predefined size with respect to the outer portion, wherein the metadata indicates calibrated relationship parameters of a location of a camera of the second device and the display of the second device.

14. The computing apparatus of claim 11, wherein determining the relative pose further comprises: identify a first reference coordinate frame of the first VIO system based on the first pose data; identify a second reference coordinate frame of the second VIO system based on the second pose data; and detect the relative pose based on an alignment of the first reference coordinate frame with the second reference coordinate frame.

15. The computing apparatus of claim 11, wherein determining the relative pose further comprises: identify a first reference coordinate frame of the first VIO system based on the first pose data; identify a second reference coordinate frame of the second VIO system based on the second pose data; and form a world reference coordinate system based on the first reference coordinate frame and the second reference coordinate frame.

16. The computing apparatus of claim 11, wherein the instructions further configure the apparatus to: display, in a display of the first device, a first virtual object based on the first pose data of the first device and the relative pose.

17. The computing apparatus of claim 16, wherein the instructions further configure the apparatus to: communicate the relative pose to the second device, and wherein the second device is configured to display a second virtual object based on the second pose data of the second device and the relative pose, wherein the second virtual object corresponds to the first virtual object.

18. The computing apparatus of claim 11, wherein the first device is configured to identify a first timestamp corresponding to the first pose data of the first device, wherein the second device generates a second timestamp in response to accessing the second pose data of the second device, and wherein the method comprises: determine that a difference between the first timestamp and the second timestamp is within a preset threshold; and in response to the difference between the first timestamp and the second timestamp being within the preset threshold, determine the relative pose between the first device and the second device.

19. The computing apparatus of claim 11, wherein the first device comprises a wearable device with a transparent display, the wearable device configured to operate a first augmented reality application, wherein the second device comprises a handheld mobile device with a non-transparent touchscreen, the handheld mobile device configured to operate a second augmented reality application.

20. A non-transitory computer-readable storage medium, the computer-readable storage medium including instructions that when executed by a computer, cause the computer to: access first pose data from a first Visual Inertial Odometry (VIO) system of a first device; access, using a camera of a first device, an image of a machine-readable code that is displayed on a display of a second device, the machine-readable code being encoded by the second device with second pose data from a second VIO system of the second device; decode, at the first device, the second pose data from the machine-readable code; and determine, at the first device, a relative pose between the first device and the second device based on the first pose data and the second pose data.

Claims 1-8 and 10-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-19 of Patent No. 12,243,266.

Re claim 1, the conflicting claims are not patentably distinct from each other because claim 1 of the Instant Application is anticipated by claims 1-2 of Patent No. 12,243,266.

Re claim 2, the conflicting claims are not patentably distinct from each other because claim 2 of the Instant Application is recited in claim 1 of Patent No. 12,243,266.

Re claim 3, the conflicting claims are not patentably distinct from each other because claim 3 of the Instant Application is recited in claim 3 of Patent No. 12,243,266.

Re claim 4, the conflicting claims are not patentably distinct from each other because claim 4 of the Instant Application is recited in claim 4 of Patent No. 12,243,266.

Re claim 5, the conflicting claims are not patentably distinct from each other because claim 5 of the Instant Application is recited in claim 5 of Patent No. 12,243,266.

Re claim 6, the conflicting claims are not patentably distinct from each other because claim 6 of the Instant Application is recited in claim 6 of Patent No. 12,243,266.

Re claim 7, the conflicting claims are not patentably distinct from each other because claim 7 of the Instant Application is recited in claim 7 of Patent No. 12,243,266.

Re claim 8, the conflicting claims are not patentably distinct from each other because claim 8 of the Instant Application is recited in claim 8 of Patent No. 12,243,266.

Re claim 10, the conflicting claims are not patentably distinct from each other because claim 10 of the Instant Application is recited in claims 1, 9 and 10 of Patent No. 12,243,266.

Re claim 11, the conflicting claims are not patentably distinct from each other because claim 11 of the Instant Application is anticipated by claims 11-12 of Patent No. 12,243,266.

Re claim 12, the conflicting claims are not patentably distinct from each other because claim 12 of the Instant Application is recited in claim 11 of Patent No. 12,243,266.

Re claim 13, the conflicting claims are not patentably distinct from each other because claim 13 of the Instant Application is recited in claim 13 of Patent No. 12,243,266.
Re claim 14, the conflicting claims are not patentably distinct from each other because claim 14 of the Instant Application is recited in claim 14 of Patent No. 12,243,266.

Re claim 15, the conflicting claims are not patentably distinct from each other because claim 15 of the Instant Application is recited in claim 15 of Patent No. 12,243,266.

Re claim 16, the conflicting claims are not patentably distinct from each other because claim 16 of the Instant Application is recited in claim 16 of Patent No. 12,243,266.

Re claim 17, the conflicting claims are not patentably distinct from each other because claim 17 of the Instant Application is recited in claim 17 of Patent No. 12,243,266.

Re claim 18, the conflicting claims are not patentably distinct from each other because claim 18 of the Instant Application is recited in claim 18 of Patent No. 12,243,266.

Re claim 19, the conflicting claims are not patentably distinct from each other because claim 19 of the Instant Application is recited in claim 19 of Patent No. 12,243,266.

Re claim 20, the conflicting claims are not patentably distinct from each other because claim 20 of the Instant Application is anticipated by claims 11-12 of Patent No. 12,243,266.

Claim 9 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 9 of Patent No. 12,243,266, and further in view of Rangaprasad et al., Pub. No. US 2021/0150295.

Re claim 9, claim 9 of Patent No. 12,243,266 recites each and every limitation of claim 9 of the Instant Application except for the limitation of “a head-wearable device.” However, the reference of Rangaprasad explicitly teaches “a head-wearable device” (see ¶ 40 for a head-wearable device (i.e., head mounted system)). Therefore, it would have been obvious before the effective filing date of the claimed invention to incorporate this feature (head-wearable device) taught by Rangaprasad into the method recited in claim 1 of Patent No. 12,243,266. One skilled in the art before the effective filing date of the claimed invention would have been motivated to incorporate the feature as taught by Rangaprasad above into the method recited in claim 1 of Patent No. 12,243,266 for the benefit of having a head mounted system that may have one or more speaker(s) and an integrated opaque display; alternatively, a head mounted system may be configured to accept an external opaque display (e.g., a smartphone), wherein the head mounted system may incorporate one or more imaging sensors to capture images or video of the physical environment, and/or one or more microphones to capture audio of the physical environment, in order to improve efficiency when using the head mounted system to capture images or video and audio (see ¶ 40).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSE M MESA whose telephone number is (571) 270-1706. The examiner can normally be reached Monday-Friday 8:30AM-6:00PM ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Thai Tran, can be reached at 571-272-7382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

3/16/2026
/JOSE M. MESA/
Examiner, Art Unit 2484
/THAI Q TRAN/
Supervisory Patent Examiner, Art Unit 2484
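The claims at issue (e.g., claims 4, 5, and 8 of both claim sets) describe standard rigid-body pose algebra: each device's VIO system reports a pose in its own reference frame, the relative pose is the transform between the two frames, and the computation is gated on the two pose timestamps being close enough. The sketch below is illustrative only, not from the application or patent; the function names, the 4x4 homogeneous-transform representation, and the 50 ms skew threshold are assumptions.

```python
import numpy as np

def pose_matrix(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def relative_pose(T_first, T_second):
    """Pose of the second device expressed in the first device's VIO frame
    (the 'alignment of reference coordinate frames' of claims 4/14)."""
    return np.linalg.inv(T_first) @ T_second

def within_threshold(ts_first, ts_second, max_skew_s=0.05):
    """Claim-8-style gate: only use the two poses if their timestamps
    differ by less than a preset threshold (0.05 s is a made-up value)."""
    return abs(ts_first - ts_second) <= max_skew_s

# Example: second device is 1 m to the right of the first, same orientation.
T_a = pose_matrix(np.eye(3), np.array([0.0, 0.0, 0.0]))
T_b = pose_matrix(np.eye(3), np.array([1.0, 0.0, 0.0]))

if within_threshold(10.000, 10.012):
    T_rel = relative_pose(T_a, T_b)
    print(T_rel[:3, 3])  # relative translation: [1. 0. 0.]
```

Either device's frame (or a derived world frame, as in claims 5/15) can serve as the shared coordinate system once this transform is known.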

Prosecution Timeline

Feb 04, 2025
Application Filed
Mar 20, 2026
Non-Final Rejection — §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598333: DATA PROCESSING METHOD AND APPARATUS, AND DEVICE, STORAGE MEDIUM AND PROGRAM PRODUCT
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12598389: IMAGING DEVICE, SENSOR CHIP, AND PROCESSING CIRCUIT
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12597444: SYSTEMS AND METHODS FOR AUTOMATED DIGITAL EDITING
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12580004: VIDEO EDITING SUPPORT DEVICE, VIDEO EDITING SUPPORT METHOD, AND RECORDING MEDIUM
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12581156: DISPLAY APPARATUS AND RECORDING METHOD
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 70%
With Interview: 86% (+16.4%)
Median Time to Grant: 2y 5m
PTA Risk: Low
Based on 575 resolved cases by this examiner. Grant probability derived from career allow rate.
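The headline figures are simple ratios of the examiner's career counts. As a sanity check on the numbers shown above (the additive interview-lift formula is an assumption about how the tool derives its "with interview" figure, not documented behavior):

```python
# Counts taken from the examiner dashboard above.
granted, resolved = 401, 575

allow_rate = granted / resolved           # career allow rate
print(f"{allow_rate:.1%}")                # 69.7%, rounded to the 70% card

interview_lift = 0.164                    # +16.4% shown above
print(f"{allow_rate + interview_lift:.0%}")  # 86% "with interview"
```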
