Prosecution Insights
Last updated: April 19, 2026
Application No. 18/800,312

REVERSE PASS-THROUGH GLASSES FOR AUGMENTED REALITY AND VIRTUAL REALITY DEVICES

Status: Non-Final OA (§DP)
Filed: Aug 12, 2024
Examiner: TSENG, CHARLES
Art Unit: 2613
Tech Center: 2600 — Communications
Assignee: Meta Platforms Technologies, LLC
OA Round: 1 (Non-Final)

Grant Probability: 79% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 79% (541 granted / 686 resolved; +16.9% vs TC avg) — above average
Interview Lift: strong, at +32.1% among resolved cases with interview
Typical Timeline: 2y 6m average prosecution; 20 applications currently pending
Career History: 706 total applications across all art units
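
These headline numbers are simple ratios over the examiner's resolved cases. A minimal Python sketch of the arithmetic, assuming hypothetical case records with `granted` and `had_interview` flags (the tool's actual data model is not public):

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool
    had_interview: bool

def allow_rate(cases: list[ResolvedCase]) -> float:
    """Share of resolved cases that ended in a grant."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases: list[ResolvedCase]) -> float:
    """Allow-rate gap between cases with and without an examiner interview."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

# Reproducing the headline figure: 541 grants out of 686 resolved cases.
history = [ResolvedCase(granted=(i < 541), had_interview=False) for i in range(686)]
print(f"{allow_rate(history):.1%}")  # 78.9%, displayed as 79%
```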

Statute-Specific Performance

§101: 12.2% (-27.8% vs TC avg)
§103: 49.2% (+9.2% vs TC avg)
§102: 6.8% (-33.2% vs TC avg)
§112: 15.9% (-24.1% vs TC avg)
Comparison baseline is a Tech Center average estimate • Based on career data from 686 resolved cases
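
The deltas are plain percentage-point differences. Notably, all four are consistent with a single flat Tech Center figure of 40%, which appears to be the "average estimate" the note above refers to. A sketch of the arithmetic:

```python
# TC average inferred from the deltas above, not a published number:
# 12.2 - 40.0 = -27.8, 49.2 - 40.0 = +9.2, 6.8 - 40.0 = -33.2, 15.9 - 40.0 = -24.1
TC_AVG_ESTIMATE = 40.0  # percent

examiner_rate = {"§101": 12.2, "§103": 49.2, "§102": 6.8, "§112": 15.9}

for statute, rate in examiner_rate.items():
    delta = rate - TC_AVG_ESTIMATE
    print(f"{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
# §101: 12.2% (-27.8% vs TC avg)  ... matching the figures shown above
```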

Office Action

§DP (Nonstatutory Double Patenting)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 1, 2, 5-10, 12, 14, 15 and 19 are objected to because of the following informalities.

For claim 1, Examiner believes this claim should be amended in the following manner:

A computer-implemented method, comprising: receiving, from one or more headset cameras, multiple images having at least two or more fields of view of a subject; extracting image features from the multiple images using a set of learnable weights; forming a three-dimensional model of the subject using the set of learnable weights; mapping the three-dimensional model of the subject onto an autostereoscopic display format that associates an image projection of the subject with a selected observation point for an onlooker; and providing, on a device display, the image projection of the subject when the onlooker is located at the selected observation point.

For claim 2, Examiner believes this claim should be amended in the following manner:

The computer-implemented method of claim 1, wherein extracting the image features comprises extracting intrinsic properties of a headset camera used to collect each of the multiple images.

For claim 5, Examiner believes this claim should be amended in the following manner:

The computer-implemented method of claim 1, wherein mapping the three-dimensional model of the subject onto the autostereoscopic display format comprises concatenating multiple feature maps produced by each of the one or more headset cameras in a permutation invariant combination, each of the one or more headset cameras having an intrinsic characteristic.

For claim 6, Examiner believes this claim should be amended in the following manner:

The computer-implemented method of claim 1, wherein providing the image projection of the subject comprises providing, on the device display, a second image projection as the onlooker moves from a first observation point to a second observation point in the multiple images.

For claim 7, Examiner believes this claim should be amended in the following manner:

The computer-implemented method of claim 1, wherein each of the multiple images is associated with a camera view vector indicating a direction of view of a face of the subject.

For claim 8, Examiner believes this claim should be amended in the following manner:

A system, comprising: one or more processors; and a memory storing instructions which, when executed by the one or more processors, cause the system to: receive, from one or more headset cameras, multiple images having at least two or more fields of view of a subject; extract image features from the multiple images using a set of learnable weights; form a three-dimensional model of the subject using the set of learnable weights; map the three-dimensional model of the subject onto an autostereoscopic display format that associates an image projection of the subject with a selected observation point for an onlooker; and provide, on a device display, the image projection of the subject when the onlooker is located at the selected observation point.

For claim 9, Examiner believes this claim should be amended in the following manner:

The system of claim 8, wherein the one or more processors further execute instructions to extract intrinsic properties of a headset camera used to collect each of the multiple images.

For claim 10, Examiner believes this claim should be amended in the following manner:

The system of claim 8, wherein the one or more processors further execute instructions to interpolate a first feature map associated with a first observation point with a second feature map associated with a second observation point.

For claim 12, Examiner believes this claim should be amended in the following manner:

The system of claim 8, wherein the one or more processors further execute instructions to concatenate multiple feature maps produced by each of the one or more headset cameras in a permutation invariant combination, each of the one or more headset cameras having an intrinsic characteristic.

For claim 14, Examiner believes this claim should be amended in the following manner:

The system of claim 8, wherein each of the multiple images is associated with a camera view vector indicating a direction of view of a face of the subject.

For claim 15, Examiner believes this claim should be amended in the following manner:

A headset, comprising: cameras configured to collect multiple images having at least two or more fields of view of at least a portion of a face of a subject; electronic component configured to: extract image features from the multiple images using a set of learnable weights; form a three-dimensional model of the subject using the set of learnable weights; and map the three-dimensional model of the subject onto an autostereoscopic display format that associates an image projection of the subject with a selected observation point for an onlooker; and a display configured to provide, based on the image projection, an autostereoscopic rendering of the portion of the face of the subject to the onlooker when the onlooker is located at the selected observation point.

For claim 19, Examiner believes this claim should be amended in the following manner:

The headset of claim 15, wherein the electronic component is further configured to extract intrinsic properties of a headset camera used to collect each of the multiple images.

Appropriate correction is required.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 1-15 and 17-19 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 10-14 of U.S. Patent No. 12,131,416 in view of Imoto et al. (U.S. Patent Application Publication 2016/0018655 A1) (made of record in the IDS submitted 9/13/2024).

The following is a claim comparison of claims 1-15 and 17-19 of the instant application and claims 10-14 of U.S. Patent No. 12,131,416.

Claim 1 of Application No. 18/800,312:

A computer-implemented method, comprising: receiving, from one or more headset cameras, multiple images having at least two or more fields of view of a subject; extracting image features from the images using a set of learnable weights; forming a three-dimensional model of the subject using the set of learnable weights; mapping the three-dimensional model of the subject onto an autostereoscopic display format that associates an image projection of the subject with a selected observation point for an onlooker; and providing, on a device display, the image projection of the subject when the onlooker is located at the selected observation point.

Corresponding claim 10 of U.S. Patent No. 12,131,416:

A system, comprising: a memory storing multiple instructions; and one or more processors configured to execute the instructions to cause the system to perform operations, comprising: receive multiple two-dimensional images having at least two or more fields of view of a subject; generate predicted features of pixels along a target direction based on the at least two or more fields of view; generate a summarized feature vector based on information associated with a camera used to collect the two-dimensional images; extract multiple image features from the two-dimensional images using a set of learnable weights; project the image features along a direction between a three-dimensional model of the subject and a selected observation point for a viewer based on the summarized feature vector, wherein projecting the image features includes concatenating multiple feature maps produced by each of multiple cameras, each of the multiple cameras having an intrinsic characteristic; and provide, to the viewer, an autostereoscopic image of the three-dimensional model of the subject based on the predicted features and a projection of the image features along the direction.

Remaining mappings for the method claims: claim 2 corresponds to claim 11; claim 3 to claim 12; claim 4 to claim 13; claim 5 to claim 14; claim 6 to claim 12; and claim 7 to claim 10.

Claim 8 of Application No. 18/800,312:

A system, comprising: one or more processors; and a memory storing instructions which, when executed by the one or more processors, cause the system to: receive, from one or more headset cameras, multiple images having at least two or more fields of view of a subject; extract image features from the images using a set of learnable weights; form a three-dimensional model of the subject using the set of learnable weights; map the three-dimensional model of the subject onto an autostereoscopic display format that associates an image projection of the subject with a selected observation point for an onlooker; and provide, on a device display, the image projection of the subject when the onlooker is located at the selected observation point.

This claim is compared against claim 10 of U.S. Patent No. 12,131,416, reproduced above. Remaining mappings for the system claims: claim 9 corresponds to claim 11; claim 10 to claim 12; claim 11 to claim 13; claim 12 to claim 14; claim 13 to claim 12; and claim 14 to claim 10.

Claim 15 of Application No. 18/800,312:

A headset, comprising: cameras configured to collect multiple images having at least two or more fields of view of at least a portion of a face of a subject; electronic component configured to: extract image features from the images using a set of learnable weights; form a three-dimensional model of the subject using the set of learnable weights; and map the three-dimensional model of the subject onto an autostereoscopic display format that associates an image projection of the subject with a selected observation point for an onlooker; and a display configured to provide, based on the image projection, an autostereoscopic rendering of the portion of the face of the subject to the onlooker when the onlooker is located at the selected observation point.

This claim is also compared against claim 10 of U.S. Patent No. 12,131,416, reproduced above. Remaining mappings for the headset claims: claim 17 corresponds to claim 10; claim 18 to claim 12; and claim 19 to claim 11.

Claims 1-15 and 17-19 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 10-14 of U.S. Patent No. 12,131,416 in view of Imoto et al. (U.S. Patent Application Publication 2016/0018655 A1).

For independent claim 1, claim 10 of U.S. Patent No. 12,131,416 does not disclose a computer implemented with a headset and a device display for providing an image to an onlooker. However, these limitations are well-known in the art as disclosed in Imoto et al. It would have been obvious to a person having ordinary skill in the art to implement a computer with a head mounted display as a headset with a device display for providing an image to an outside person as an onlooker to appropriately understand the thoughts and intentions of a user wearing the headset (Figs. 11-12; par. 184 and 218) as taught in Imoto et al. Claim 10 of U.S. Patent No. 12,131,416 otherwise recites identical limitations of claim 1 as shown in the claim chart above. Thus, claim 1 of the instant application is not patentably distinct from claim 10 of U.S. Patent No. 12,131,416.

For dependent claims 2-7, claims 10-14 of U.S. Patent No. 12,131,416 mirror and recite the limitations of claims 2-7 as set forth in the claim chart above. Thus, claims 2-7 of the instant application are not patentably distinct from claims 10-14 of U.S. Patent No. 12,131,416.

For independent claim 8, claim 10 of U.S. Patent No. 12,131,416 does not disclose a system implemented with a headset and a device display for providing an image to an onlooker. However, these limitations are well-known in the art as disclosed in Imoto et al. It would have been obvious to a person having ordinary skill in the art to implement a system with a head mounted display as a headset with a device display for providing an image to an outside person as an onlooker to appropriately understand the thoughts and intentions of a user wearing the headset (Figs. 11-12; par. 184 and 218) as taught in Imoto et al. Claim 10 of U.S. Patent No. 12,131,416 otherwise recites identical limitations of claim 8 as shown in the claim chart above. Thus, claim 8 of the instant application is not patentably distinct from claim 10 of U.S. Patent No. 12,131,416.

For dependent claims 9-14, claims 10-14 of U.S. Patent No. 12,131,416 mirror and recite the limitations of claims 9-14 as set forth in the claim chart above. Thus, claims 9-14 of the instant application are not patentably distinct from claims 10-14 of U.S. Patent No. 12,131,416.

For independent claim 15, claim 10 of U.S. Patent No. 12,131,416 does not disclose a headset collecting information of a face and an electronic component with a display for providing an image to an onlooker. However, these limitations are well-known in the art as disclosed in Imoto et al. It would have been obvious to a person having ordinary skill in the art to implement a headset with sensors for collecting information of a face and an electronic component with a display for providing an image to an outside person as an onlooker to appropriately understand the thoughts and intentions of a user wearing the headset (Figs. 8 and 11-12; par. 147, 184 and 211) as taught in Imoto et al. Claim 10 of U.S. Patent No. 12,131,416 otherwise recites identical limitations of claim 15 as shown in the claim chart above. Thus, claim 15 of the instant application is not patentably distinct from claim 10 of U.S. Patent No. 12,131,416.

For dependent claims 17-19, claims 10-12 of U.S. Patent No. 12,131,416 mirror and recite the limitations of claims 17-19 as set forth in the claim chart above. Thus, claims 17-19 of the instant application are not patentably distinct from claims 10-12 of U.S. Patent No. 12,131,416.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES TSENG, whose telephone number is (571) 270-3857. The examiner can normally be reached 8-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Xiao Wu, can be reached at (571) 272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHARLES TSENG/
Primary Examiner, Art Unit 2613
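
For readers who parse code more readily than claim language, the independent claims recite a familiar multi-view neural rendering shape: a shared set of learnable weights applied to each headset-camera view, a permutation-invariant fusion of the per-view features (claim 5's "permutation invariant combination", approximated below with a mean over views), and an image projection conditioned on the onlooker's selected observation point. The following PyTorch sketch is purely illustrative; every module, dimension, and name is an assumption, not the applicant's actual implementation.

```python
import torch
import torch.nn as nn

class MultiViewProjector(nn.Module):
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        # Shared "set of learnable weights" applied to every camera view.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Maps fused features plus an observation-point direction to an
        # "image projection" (a flat RGB patch here, purely for illustration).
        self.decoder = nn.Linear(feat_dim + 3, 3 * 32 * 32)

    def forward(self, views: torch.Tensor, obs_point: torch.Tensor) -> torch.Tensor:
        # views: (num_cameras, 3, H, W); obs_point: (3,) viewing direction
        feats = self.encoder(views)             # (num_cameras, feat_dim)
        fused = feats.mean(dim=0)               # mean over views is permutation invariant
        x = torch.cat([fused, obs_point])       # condition on the onlooker's viewpoint
        return self.decoder(x).view(3, 32, 32)  # projection for that observation point

model = MultiViewProjector()
views = torch.randn(4, 3, 64, 64)  # four headset-camera fields of view
projection = model(views, torch.tensor([0.0, 0.0, 1.0]))
print(projection.shape)  # torch.Size([3, 32, 32])
```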

Prosecution Timeline

Aug 12, 2024: Application Filed
Mar 09, 2026: Non-Final Rejection — §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594021: EDITING METHOD OF DYNAMIC SPECTRUM PROGRAM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591405: SHARED CONTROL OF A VIRTUAL OBJECT BY MULTIPLE DEVICES (granted Mar 31, 2026; 2y 5m to grant)
Patent 12579760: DIGITAL CONTENT PLATFORM INCLUDING METHODS AND SYSTEM FOR RECORDING AND STORING DIGITAL CONTENT (granted Mar 17, 2026; 2y 5m to grant)
Patent 12572015: TRANSPARENT OPTICAL MODULE USING PIXEL PATCHES AND ASSOCIATED LENSLETS (granted Mar 10, 2026; 2y 5m to grant)
Patent 12566503: REPRESENTATION FORMAT FOR HAPTIC OBJECT (granted Mar 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 79%
With Interview: 99% (+32.1%)
Median Time to Grant: 2y 6m
PTA Risk: Low
Based on 686 resolved cases by this examiner. Grant probability derived from career allow rate.
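
A minimal sketch of how these projection fields could be assembled from the examiner metrics above. Whether the 99% with-interview figure is an observed cohort rate or a capped adjustment of the 79% base rate is not something the tool discloses; the cap below is an assumption chosen only to reproduce the displayed numbers.

```python
def prosecution_projection(allow_rate: float, interview_lift: float) -> dict:
    base = allow_rate                                  # 541 / 686 ~ 0.789
    with_interview = min(base + interview_lift, 0.99)  # cap is an assumption, per the 99% shown
    return {
        "grant_probability_pct": round(base * 100),         # 79
        "with_interview_pct": round(with_interview * 100),  # 99
    }

print(prosecution_projection(541 / 686, 0.321))
# {'grant_probability_pct': 79, 'with_interview_pct': 99}
```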
