Prosecution Insights
Last updated: April 19, 2026
Application No. 18/513,643

3D PROJECTION METHOD AND 3D PROJECTION DEVICE

Final Rejection §103

Filed: Nov 20, 2023
Examiner: WERNER, DAVID N
Art Unit: 2487
Tech Center: 2400 — Computer Networks
Assignee: Coretronic Corporation
OA Round: 2 (Final)

Grant Probability: 68% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
With Interview: 84%

Examiner Intelligence

Career Allow Rate: 68%, above average (483 granted / 713 resolved; +9.7% vs TC avg)
Interview Lift: +16.2% in resolved cases with an interview (strong)
Avg Prosecution: 3y 3m (32 currently pending)
Total Applications: 745, across all art units

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§103: 44.8% (+4.8% vs TC avg)
§102: 23.1% (-16.9% vs TC avg)
§112: 16.1% (-23.9% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 713 resolved cases
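The "vs TC avg" figures above are simple differences between the examiner's per-statute rate and the Tech Center average. A minimal sketch of that arithmetic; the examiner rates come from the chart, while the TC averages here are back-solved from the displayed deltas and are assumptions, not USPTO data:

```python
# Hypothetical recomputation of the "vs TC avg" deltas shown above.
# examiner_rates are the dashboard figures; tc_avg values are
# back-solved from the displayed deltas (assumptions).
examiner_rates = {"101": 7.4, "103": 44.8, "102": 23.1, "112": 16.1}
tc_avg = {"101": 40.0, "103": 40.0, "102": 40.0, "112": 40.0}

deltas = {s: round(examiner_rates[s] - tc_avg[s], 1) for s in examiner_rates}

for statute, delta in deltas.items():
    print(f"§{statute}: {examiner_rates[statute]:.1f}% ({delta:+.1f}% vs TC avg)")
```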

Office Action

§103
DETAILED ACTION

This Office action for U.S. Patent Application No. 18/513,643 is responsive to communications filed 25 November 2025, in reply to the Non-Final Rejection of 26 June 2025. Claims 1, 2, 4–12, and 14–20 are pending. In the previous Office action, claims 1–20 were rejected under 35 U.S.C. § 103 as obvious over U.S. Patent Application Publication No. 2017/0099484 A1 (“Mashitani”) in view of U.S. Patent Application Publication No. 2013/0315472 A1 (“Hattori”).

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed with respect to representative claim 1 have been fully considered, but they are not persuasive.

With respect to the argument that Hattori is incompatible with the claimed invention because Hattori, directed to glasses-free display technology, is allegedly incompatible with sequential left and right images (25 November 2025 “REMARKS” (“Rem.”) 11–14): first, Mashitani, not Hattori, was cited as teaching the sequential image projection. 26 June 2025 Non-Final Rejection at 3. Applicant is reminded that one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 U.S.P.Q. 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 U.S.P.Q. 375 (Fed. Cir. 1986). Second, Applicant has not met the burden of proof for the premise that glasses-free technology teaches away from sequential stereoscopic images. M.P.E.P. §§ 2145(I), 715.01(c)(II) (“Arguments presented by the applicant cannot take the place of evidence in the record”).

With respect to the argument that Mashitani does not teach the new limitation (Rem. 14–16), the examiner notes that the claimed four pixels in the 2x2 pixel array are claimed as components of “pixel groups of the fusion image”, and only “from” the two eye images. As such, Applicant has failed to allege an actual difference between the claimed invention and the admitted Mashitani “single image” (Rem. 16).

With respect to the allegation that one of ordinary skill in the art would not combine the Mashitani display with glasses and the Hattori naked-eye display (Rem. 16), Hattori was cited for the limited purpose of fusing two input images. Non-Final Rejection at 4. Considering this, the alleged difference in how Mashitani and Hattori display the images is not relevant.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. § 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 4–12, and 14–20 are rejected under 35 U.S.C. § 103 as being unpatentable over U.S. Patent Application Publication No. 2017/0099484 A1 (“Mashitani”) in view of U.S. Patent Application Publication No. 2013/0315472 A1 (“Hattori”).
Mashitani, directed to a video display projector, teaches with respect to claim 1 a 3D projection method, adapted to a 3D projection device (¶ 0002, “projection video display that has a three-dimensional (3D) video display function”), comprising steps of: obtaining a first eye image and a second eye image by the 3D projection device (¶ 0116, receiving combined left-eye and right-eye video signals) . . ., [wherein] each of the plurality of first pixel groups comprises a first pixel, a second pixel, a third pixel, and a fourth pixel (Figs. 6–7, four subframes); wherein the first pixel, the second pixel, the third pixel, and the fourth pixel in each of the plurality of pixel groups of the fusion image are arranged into a 2x2 pixel array (¶ 0061, “video signal generator handles four (2x2) pixels as a single block”), the first pixel and the third pixel in each of the plurality of pixel groups are from the first eye image (Figs. 6, 11–14, upper-left and lower-right pixels in each 2x2 block form the left-eye subframes), and the second pixel and the fourth pixel in each of the plurality of pixel groups are from the second eye image (id., upper-right and lower-left pixels in each 2x2 block form the right-eye subframes); generating a first projection image by the 3D projection device based on the first pixels of the plurality of pixel groups (Fig. 13, ¶ 0085; displaying subframe L1 as a left-eye image); generating a second projection image by the 3D projection device based on the second pixels of the plurality of pixel groups (id., displaying subframe R1 as a right-eye image); generating a third projection image by the 3D projection device based on the third pixels of the plurality of pixel groups (id., displaying subframe L2 as a left-eye image); generating a fourth projection image by the 3D projection device based on the fourth pixels of the plurality of pixel groups (id., displaying subframe R2 as a right-eye image); and sequentially projecting the first projection image, the second projection image, the third projection image, and the fourth projection image by the 3D projection device (Fig. 13, sequentially projecting subframes L1, R1, L2, and R2), wherein the first projection image and the third projection image correspond to the first eye image (id., left-eye images L1 and L2), the second projection image and the fourth projection image correspond to the second eye image (id., right-eye images R1 and R2), the first eye image is one of a left eye image and a right eye image (id., left-eye images), and the second eye image is another one of the left eye image and the right eye image (id., right-eye images).

The claimed invention differs from Mashitani in that it further specifies forming a fusion image of the first and second eye images; Mashitani does not teach this material. However, Hattori, directed to three-dimensional image processing, teaches with respect to claim 1: forming a fusion image of the first eye image and the second eye image (Fig. 4, forming 2D and 3D image; Fig. 6, forming neutral viewpoint image from input parallax image pairs), wherein the fusion image comprises a plurality of pixel groups (¶¶ 0186, 0194; operation on a macroblock basis).

It would have been obvious to one of ordinary skill in the art at the time of effective filing to modify Mashitani to produce additional or alternative 2D images from the left-eye and right-eye video signals, as taught by Hattori, to allow users to choose whether to view the images in 3D or 2D. Hattori ¶ 0157.

Regarding claim 2, Mashitani in view of Hattori teaches the 3D projection method according to claim 1, wherein the first pixel and the third pixel in each of the plurality of first pixel groups and each of the plurality of second pixel groups are arranged along a first diagonal direction (Figs. 6, 11–14; upper-left and lower-right pixels in each 2x2 block form the left-eye subframes), and the second pixel and the fourth pixel in each of the plurality of first pixel groups and each of the plurality of second pixel groups are arranged along a second diagonal direction perpendicular to the first diagonal direction (id., upper-right and lower-left pixels in each 2x2 block form the right-eye subframes).

Regarding claim 4, Mashitani in view of Hattori teaches the 3D projection method according to claim 1, wherein the plurality of pixel groups comprise a first pixel group (Mashitani Figs. 7–8, first subframe), the first pixel in the first pixel group has a first coordinate in the fusion image (Hattori ¶¶ 0134–0135, position m′ of pixel in virtual viewpoint image), the first eye image comprises a plurality of first eye pixels (Fig. 6, parallax image or viewpoint image comprising a plurality of pixels), and the method comprises: finding a first reference eye pixel from the plurality of first eye pixels (¶ 0134, position m in parallax image), wherein the first reference eye pixel corresponds to the first coordinate in the first eye image (¶ 0135, corresponding positions between positions m′ and m); and setting the first pixel in the first pixel group to correspond to the first reference eye pixel (id., transform from m to m′).
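The pixel arrangement the rejection maps onto Mashitani can be sketched concretely: in each 2x2 block of the fusion image, the main diagonal (first and third pixels) comes from the left-eye image and the anti-diagonal (second and fourth pixels) from the right-eye image, and the four per-position subframes are then projected sequentially. The function names and the L1/R1/L2/R2 labels follow the rejection's reading of Mashitani Fig. 13 but are illustrative assumptions, not the applicant's claim language:

```python
def fuse(left, right):
    """Interleave two equal-size eye images (lists of rows) into a
    fusion image: in each 2x2 block, the main diagonal comes from the
    left-eye image, the anti-diagonal from the right-eye image."""
    h, w = len(left), len(left[0])
    return [[left[y][x] if (x % 2) == (y % 2) else right[y][x]
             for x in range(w)] for y in range(h)]

def subframes(fused):
    """Split the fusion image into the four sequentially projected
    subframes: first pixels -> L1, second -> R1, third -> L2,
    fourth -> R2 (upper-left, upper-right, lower-right, lower-left)."""
    pick = lambda dy, dx: [row[dx::2] for row in fused[dy::2]]
    return pick(0, 0), pick(0, 1), pick(1, 1), pick(1, 0)
```

On a uniform "L"/"R" pair this produces a checkerboard whose even-parity cells are left-eye pixels, and the four extracted subframes are pure left-eye or pure right-eye images, matching the L1, R1, L2, R2 reading above.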
Regarding claim 5, with all else as in claim 4, Hattori's Figure 6 shows the transform of both parallax images to the virtual viewpoint image, corresponding to performing the claim 4 method on the second eye pixels as claimed.

Regarding claim 6, Mashitani teaches the 3D projection method according to claim 1, wherein the step of sequentially projecting the first projection image, the second projection image, the third projection image, and the fourth projection image comprises: shifting the second projection image into a first position along a second direction by controlling an image shifting device of the 3D projection device (Fig. 7, ¶ 0062; shifting second subframe right by a half pixel); shifting the third projection image to a third position along a third direction by controlling the image shifting device of the 3D projection device (id., shifting third subframe down and to the right by a half pixel); and shifting the fourth projection image to a fourth position along a fourth direction by controlling the image shifting device of the 3D projection device (id., shifting fourth subframe down by a half pixel).

The claimed invention differs from Mashitani in that it specifies shifting the first projection image to a first position along a first direction by controlling the image shifting device of the 3D projection device; in Mashitani, the first subframe is not shifted from its location. However, the claimed shifting is equivalent to measuring the original location of the subframes from a point a quarter pixel down and to the right, and shifting the first through fourth subframes up and left, up and right, down and right, and down and left by a quarter pixel each; as such, Mashitani does not patentably differ from the claimed invention. See M.P.E.P. § 2183 (equivalence).

Regarding claim 7, Mashitani makes obvious the 3D projection method according to claim 6, wherein the [four] projection image[s] have a same shifted distance (Fig. 7; the claimed invention is equivalent to this process as a set of quarter-pixel diagonal shifts if measured from an initial position down and to the right of the Mashitani pixel locations before shifting).

Regarding claim 8, Mashitani makes obvious the 3D projection method according to claim 6, wherein the second direction is perpendicular to the first direction (Fig. 7; the claimed invention is equivalent to the non-shift of the first subframe and the half-pixel right shift of the second subframe as a quarter-pixel shift of the first subframe up and left and a quarter-pixel shift of the second subframe up and right, as measured from a quarter pixel down and right of the original pixel locations), the third direction is opposite to the first direction (id., relative positions of the first and shifted third subframes along the same upper-left to lower-right diagonal), and the fourth direction is opposite to the second direction (id., relative positions of the shifted second and shifted fourth subframes along the same upper-right to lower-left diagonal).

Regarding claim 9, Mashitani makes obvious the 3D projection method according to claim 6, further comprising a step of: shifting the first projection image from a preset position to the first position along the first direction, shifting the second projection image from the preset position to the second position along the second direction, shifting the third projection image from the preset position to the third position along the third direction, and shifting the fourth projection image from the preset position to the fourth position along the fourth direction by controlling the image shifting device of the 3D projection device (Fig. 7; the claimed invention is equivalent to the initial positions a1, b1, c1, and d1 being one quarter pixel down and right).
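The § 2183 equivalence argument for claims 6-9 is geometric and easy to check numerically: re-measuring Mashitani's half-pixel shifts from an origin a quarter pixel down and to the right turns them into four equal quarter-pixel diagonal shifts. A sketch of that re-measurement; the coordinate convention (x grows rightward, y grows downward, units in pixels) is an assumption for illustration:

```python
# Mashitani's absolute subframe offsets (Fig. 7): the first subframe
# is unshifted, the others move by half a pixel.
mashitani = {
    "first":  (0.0, 0.0),  # not shifted
    "second": (0.5, 0.0),  # half pixel right
    "third":  (0.5, 0.5),  # half pixel down and right
    "fourth": (0.0, 0.5),  # half pixel down
}

# Re-measure from an origin a quarter pixel down and to the right.
ox, oy = 0.25, 0.25
relative = {k: (x - ox, y - oy) for k, (x, y) in mashitani.items()}

# Each subframe now shifts the same quarter-pixel diagonal distance:
# up-left, up-right, down-right, and down-left respectively.
distances = [round((dx * dx + dy * dy) ** 0.5, 6)
             for dx, dy in relative.values()]
```

The four relative offsets come out as (-0.25, -0.25), (+0.25, -0.25), (+0.25, +0.25), and (-0.25, +0.25): the same-distance diagonal shifts the examiner reads onto claims 7 and 8.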
Regarding claim 10, Mashitani teaches the 3D projection method according to claim 6, further comprising a step of: in response to the first projection image being shifted to the first position, controlling a pair of 3D glasses to enable a first lens corresponding to the first eye and disable a second lens corresponding to the second eye (¶¶ 0080–81, synchronizing the left-eye and right-eye frames with liquid crystal shutter glasses); in response to the second projection image being shifted to the second position, controlling the pair of 3D glasses to disable the first lens corresponding to the first eye and enable the second lens corresponding to the second eye (id.); in response to the third projection image being shifted to the third position, controlling the pair of 3D glasses to enable the first lens corresponding to the first eye and disable the second lens corresponding to the second eye (id.); and in response to the fourth projection image being shifted to the fourth position, controlling the pair of 3D glasses to disable the first lens corresponding to the first eye and enable the second lens corresponding to the second eye (id.).

Regarding claim 11, Mashitani in view of Hattori teaches a 3D projection device comprising: an image processing device (Mashitani Figs. 1–2A, projection video display 100), configured to perform: [the claim 1 method] (claim 1 rejection supra).

Regarding claim 12, Mashitani in view of Hattori teaches the 3D projection device according to claim 8, wherein the first pixel and the third pixel in each of the plurality of first pixel groups and each of the plurality of second pixel groups are arranged along a first diagonal direction (Figs. 6, 11–14; upper-left and lower-right pixels in each 2x2 block form the left-eye subframes), and the second pixel and the fourth pixel in each of the plurality of first pixel groups and each of the plurality of second pixel groups are arranged along a second diagonal direction perpendicular to the first diagonal direction (id., upper-right and lower-left pixels in each 2x2 block form the right-eye subframes).

Regarding claim 14, Mashitani in view of Hattori teaches the 3D projection device according to claim 11, wherein the plurality of pixel groups comprise a first pixel group (Mashitani Figs. 7–8, first subframe), the first pixel in the first pixel group has a first coordinate in the fusion image (Hattori ¶¶ 0134–0135, position m′ of pixel in virtual viewpoint image), the first eye image comprises a plurality of first eye pixels (Fig. 6, parallax image or viewpoint image comprising a plurality of pixels), and the method comprises: finding a first reference eye pixel from the plurality of first eye pixels (¶ 0134, position m in parallax image), wherein the first reference eye pixel corresponds to the first coordinate in the first eye image (¶ 0135, corresponding positions between positions m′ and m); and setting the first pixel in the first pixel group to correspond to the first reference eye pixel (id., transform from m to m′).

Regarding claim 15, with all else as in claim 14, Hattori's Figure 6 shows the transform of both parallax images to the virtual viewpoint image, corresponding to performing the claim 4 method on the second eye pixels as claimed.

Regarding claim 16, Mashitani in view of Hattori teaches the 3D projection device according to claim 11, wherein the image shifting device is configured to perform: shifting the second projection image into a first position along a second direction by controlling an image shifting device of the 3D projection device (Fig. 7, ¶ 0062; shifting second subframe right by a half pixel); shifting the third projection image to a third position along a third direction by controlling the image shifting device of the 3D projection device (id., shifting third subframe down and to the right by a half pixel); and shifting the fourth projection image to a fourth position along a fourth direction by controlling the image shifting device of the 3D projection device (id., shifting fourth subframe down by a half pixel).

The claimed invention differs from Mashitani in that it specifies shifting the first projection image to a first position along a first direction by controlling the image shifting device of the 3D projection device; in Mashitani, the first subframe is not shifted from its location. However, the claimed shifting is equivalent to measuring the original location of the subframes from a point a quarter pixel down and to the right, and shifting the first through fourth subframes up and left, up and right, down and right, and down and left by a quarter pixel each; as such, Mashitani does not patentably differ from the claimed invention. See M.P.E.P. § 2183 (equivalence).

Regarding claim 17, Mashitani in view of Hattori teaches the 3D projection device according to claim 16, wherein the [four] projection image[s] have a same shifted distance (Fig. 7; the claimed invention is equivalent to this process as a set of quarter-pixel diagonal shifts if measured from an initial position down and to the right of the Mashitani pixel locations before shifting).

Regarding claim 18, Mashitani in view of Hattori teaches the 3D projection device according to claim 16, wherein the second direction is perpendicular to the first direction (Fig. 7; the claimed invention is equivalent to the non-shift of the first subframe and the half-pixel right shift of the second subframe as a quarter-pixel shift of the first subframe up and left and a quarter-pixel shift of the second subframe up and right, as measured from a quarter pixel down and right of the original pixel locations), the third direction is opposite to the first direction (id., relative positions of the first and shifted third subframes along the same upper-left to lower-right diagonal), and the fourth direction is opposite to the second direction (id., relative positions of the shifted second and shifted fourth subframes along the same upper-right to lower-left diagonal).

Regarding claim 19, Mashitani makes obvious the 3D projection device according to claim 16, wherein the image shifting device is configured to perform: shifting the first projection image from a preset position to the first position along the first direction, shifting the second projection image from the preset position to the second position along the second direction, shifting the third projection image from the preset position to the third position along the third direction, and shifting the fourth projection image from the preset position to the fourth position along the fourth direction by controlling the image shifting device of the 3D projection device (Fig. 7; the claimed invention is equivalent to the initial positions a1, b1, c1, and d1 being one quarter pixel down and right).
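The shutter-glass synchronization the rejection maps to claim 10 (Mashitani ¶¶ 0080–81) alternates the active lens with each projected subframe: the lens for the eye being served opens while the other closes. A minimal sketch; the subframe labels and function names are illustrative assumptions:

```python
# Projection order from the claim 1 mapping: L1, R1, L2, R2.
SEQUENCE = ["L1", "R1", "L2", "R2"]

def shutter_state(subframe):
    """Return (left_lens_open, right_lens_open) for one subframe:
    enable the lens for the eye being served, disable the other."""
    is_left_eye = subframe.startswith("L")
    return (is_left_eye, not is_left_eye)

schedule = [(s, *shutter_state(s)) for s in SEQUENCE]
# -> [("L1", True, False), ("R1", False, True),
#     ("L2", True, False), ("R2", False, True)]
```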
Regarding claim 20, Mashitani in view of Hattori teaches the 3D projection device according to claim 16, wherein the image processing device is configured to perform: in response to the first projection image being shifted to the first position, controlling a pair of 3D glasses to enable a first lens corresponding to the first eye and disable a second lens corresponding to the second eye (¶¶ 0080–81, synchronizing the left-eye and right-eye frames with liquid crystal shutter glasses); in response to the second projection image being shifted to the second position, controlling the pair of 3D glasses to disable the first lens corresponding to the first eye and enable the second lens corresponding to the second eye (id.); in response to the third projection image being shifted to the third position, controlling the pair of 3D glasses to enable the first lens corresponding to the first eye and disable the second lens corresponding to the second eye (id.); and in response to the fourth projection image being shifted to the fourth position, controlling the pair of 3D glasses to disable the first lens corresponding to the first eye and enable the second lens corresponding to the second eye (id.).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The following prior art was found using an Artificial Intelligence assisted search, via an internal AI tool that uses the classification of the application under the Cooperative Patent Classification (CPC) system, as well as the specification, including the claims and abstract, of the application, as contextual information. The documents are ranked from most to least relevant. Where possible, English-language equivalents are given, and redundant results within the same patent families are eliminated. See “New Artificial Intelligence Functionality in PE2E Search”, 1504 OG 359 (15 November 2022); “Automated Search Pilot Program”, 90 F.R. 48,161 (8 October 2025).
US 2014/0300536 A1
TW 201504684 A
US 2012/0044567 A1
US 2011/0074784 A1
CN 216086864 U
US 2017/0237973 A1
US 2018/0130262 A1
US 2020/0209626 A1

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 C.F.R. § 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 C.F.R. § 1.17(a)) pursuant to 37 C.F.R. § 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to David N. Werner, whose telephone number is (571) 272-9662. The examiner can normally be reached M-F 7:30-4:00 Central. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Dave Czekaj, can be reached at (571) 272-7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/David N Werner/
Primary Examiner, Art Unit 2487

Prosecution Timeline

Nov 20, 2023: Application Filed
Jun 24, 2025: Non-Final Rejection — §103
Nov 25, 2025: Response Filed
Feb 06, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598312
OVERHEAD REDUCTION IN MEDIA STORAGE AND TRANSMISSION
Granted Apr 07, 2026 • 2y 5m to grant

Patent 12598297
METHOD AND APPARATUS FOR RECONSTRUCTING 360-DEGREE IMAGE ACCORDING TO PROJECTION FORMAT
Granted Apr 07, 2026 • 2y 5m to grant

Patent 12593144
SOLID STATE IMAGING ELEMENT, IMAGING DEVICE, AND SOLID STATE IMAGING ELEMENT CONTROL METHOD
Granted Mar 31, 2026 • 2y 5m to grant

Patent 12587754
METHOD FOR DYNAMIC CORRECTION FOR PIXELS OF THERMAL IMAGE ARRAY
Granted Mar 24, 2026 • 2y 5m to grant

Patent 12587689
METHOD AND APPARATUS FOR RECONSTRUCTING 360-DEGREE IMAGE ACCORDING TO PROJECTION FORMAT
Granted Mar 24, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 68%
With Interview: 84% (+16.2%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate
Based on 713 resolved cases by this examiner. Grant probability derived from career allow rate.
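The caption states that the grant probability is derived from the career allow rate. A sketch of that arithmetic using the counts shown above (483 granted of 713 resolved); treating the interview lift as a simple additive rate difference is an assumption, since the underlying interview counts are not given:

```python
granted, resolved = 483, 713
base_rate = granted / resolved   # career allow rate
interview_lift = 0.162           # +16.2% per the dashboard

with_interview = base_rate + interview_lift

print(f"Grant probability: {base_rate:.0%}")       # 68%
print(f"With interview:    {with_interview:.0%}")  # 84%
```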
