Prosecution Insights
Last updated: April 19, 2026
Application No. 18/697,881

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING SYSTEM

Final Rejection: §103, §112
Filed
Apr 02, 2024
Examiner
DHILLON, PUNEET S
Art Unit
2488
Tech Center
2400 — Computer Networks
Assignee
Sony Group Corporation
OA Round
2 (Final)
Grant Probability: 83% (Favorable)
OA Rounds: 3-4
To Grant: 2y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% (above average; 232 granted / 281 resolved; +24.6% vs TC avg)
Interview Lift: +18.4% (strong; allowance rate among resolved cases with vs. without an interview)
Avg Prosecution: 2y 6m (typical timeline; 41 applications currently pending)
Total Applications: 322 (career total, across all art units)
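The headline allow rate is a simple ratio of the counts shown above; as a quick sanity check (a sketch using only the numbers in this dashboard):

```python
granted, resolved = 232, 281           # counts shown above
allow_rate = 100 * granted / resolved  # career allowance rate
print(f"{allow_rate:.1f}%")            # 82.6%, displayed rounded as 83%
```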

Statute-Specific Performance

§101: 5.4% (-34.6% vs TC avg)
§103: 49.1% (+9.1% vs TC avg)
§102: 17.5% (-22.5% vs TC avg)
§112: 24.9% (-15.1% vs TC avg)
Based on career data from 281 resolved cases; "vs TC avg" compares against the Tech Center average estimate.
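The "vs TC avg" deltas imply a Tech Center baseline for each statute: subtracting each delta from the examiner's rate recovers it (a sketch using only the figures above; the dictionary names are ours):

```python
# Examiner's rate per statute and its stated delta vs the Tech Center average
examiner = {"101": 5.4, "103": 49.1, "102": 17.5, "112": 24.9}
delta = {"101": -34.6, "103": +9.1, "102": -22.5, "112": -15.1}

# Implied TC average estimate = examiner rate minus stated delta
tc_avg = {s: round(examiner[s] - delta[s], 1) for s in examiner}
print(tc_avg)  # every implied baseline works out to 40.0 here
```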

Office Action

§103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Applicant(s) Response to Official Action

The response filed on 10/16/2025 has been entered and made of record.

Response to Arguments/Amendments

Presented arguments have been fully considered, but are rendered moot in view of the new ground(s) of rejection necessitated by amendment(s) initiated by the applicant(s).

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-14 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claims 1 and 10-12 recite the following limitation: "... determine[ing], based on the specific [first] image of the user, first position information of a head of the user at a time of activation of content ..." (emphasis added). This limitation appears to add a time element that is not found in the applicant's specification. According to the Pre-Grant Publication, US 2025/0240397 A1 (see paragraph [0077]), the closest description of this limitation is the following: "For example, the determination unit 12 determines whether or not the difference between the position information of the head of the user when the content is activated and the position information of the head of the user when the user moves, ... exceeds a threshold value" (emphasis added). As written, the limitation adds a timing relationship to "content activation" that differs from what is described in the applicant's specification. For example, the specification implies that the content is already activated ("when the content is activated"), as opposed to the limitation ("at a time of activation of content"). Therefore, the limitation is interpreted as the following: "... determine[ing], based on the specific [first] image of the user, first position information of a head of the user when the content is activated ..." (emphasis added).

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 8 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention. Claim 8 recites the limitation "... perform inspection on a stereoscopic display at a specific timing; generate, based on each of the performed inspection and the first display parameter ..." (emphasis added to accentuate insufficient antecedent basis). The limitation initially suggests a singular action ("perform inspection") and then later suggests a plurality of the same action ("each of the performed inspection"). Therefore, the limitation lacks clarity. For the purposes of examination, claim 8 is interpreted as the following: "... perform inspection on a stereoscopic display at a specific timing; generate, based on the performed inspection and the first display parameter ...".

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Zomet (US 2013/0293576 A1) in view of Lee et al., hereinafter referred to as Lee (US 2018/0275400 A1).

As per claim 1, Zomet discloses an information processing apparatus (Zomet: Abstract), comprising a central processing unit (CPU) configured to: receive a first image of a user (Zomet: Paras. [0025], [0028] disclose a processor (CPU) and memory that receives images and performs tasks.); determine, based on the first image of the user, first position information of a head of the user at a time of activation of content (Zomet: Paras. [0012], [0057] disclose identifying a location of an observer (head/viewpoint position) in relation to the display.); and generate a confirmation image based on a viewpoint position of the user (Zomet: Paras. [0028], [0056]-[0057], [0063] disclose generating and displaying a confirmation image (calibration pattern) associated with crosstalk (ghosting artifacts) based on user input or location, dynamically adjusting based on the user's viewpoint position.). However, Zomet does not explicitly disclose "... determine a difference between the first position information of the head of the user and second position information of the head of the user, wherein the second position information is associated with a movement of the user; measure a movement of a viewpoint of the user based on the determined difference; ... and the measured movement of the viewpoint of the user that exceeds a specific threshold value ...". Further, Lee is in the same field of endeavor and teaches determining a difference between the first position information of the head of the user and second position information of the head of the user, wherein the second position information is associated with a movement of the user (Lee: Paras. [0047], [0051], [0054] disclose determining a difference between a stationary position (e.g., eye positions intended for rendering) of the user [first position information of the head of the user] and actual eye positions of the user [second position information of the head of the user], wherein the actual eye positions of the user are associated with a movement of the user.); measuring a movement of a viewpoint of the user based on the determined difference (Lee: Paras. [0051], [0062] disclose measuring the range of movement based on whether the user is stationary or moving, based on the eye movement information.); and the measured movement of the viewpoint of the user exceeding a specific threshold value (Lee: Paras. [0058], [0062] disclose determining if the range of movement is greater than the preset threshold.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, and having the teachings of Zomet and Lee before him or her, to modify the autostereoscopic display system of Zomet to include the difference-determining, measured-movement, position-information threshold feature as described in Lee. The motivation for doing so would have been to improve user experience by providing a configuration that prevents image quality from being degraded due to crosstalk.

As per claim 10, Zomet discloses an information processing method (Zomet: Abstract), comprising, in an information processing apparatus (computer): receiving a specific image of a user (Zomet: Paras. [0025], [0028] disclose a computer that receives images and performs tasks.); determining, based on the specific image of the user, first position information of a head of the user at a time of activation of content (Zomet: Paras. [0012], [0057] disclose identifying a location of an observer (head/viewpoint position) in relation to the display.); and generating a confirmation image based on a viewpoint position of the user (Zomet: Paras. [0028], [0056]-[0057], [0063] disclose generating and displaying a confirmation image (calibration pattern) associated with crosstalk (ghosting artifacts) based on user input or location, dynamically adjusting based on the user's viewpoint position.).
However, Zomet does not explicitly disclose “… determining a difference between the first position information of the head of the user and second position information of the head of the user, wherein the second position information is associated with a movement of the user; measuring a movement of a viewpoint of the user based on the determined difference; … and the measured movement of the viewpoint of the user that exceeds a specific threshold value …”. Further, Lee is in the same field of endeavor and teaches determining a difference between the first position information of the head of the user and second position information of the head of the user, wherein the second position information is associated with a movement of the user (Lee: Paras. [0047], [0051], [0054] disclose determining a difference between a stationary position (e.g., eye positions intended for rendering) of the user [first position information of the head of the user] and actual eye positions of the user [second position information of the head of the user], wherein the actual eye positions of the user are associated with a movement of the user.); measuring a movement of a viewpoint of the user based on the determined difference (Lee: Paras. [0051], [0062] disclose measuring the range of movement based on whether the user is stationary or moving, based on the eye movement information.); and the measured movement of the viewpoint of the user that exceeds a specific threshold value (Lee: Paras. [0058], [0062] disclose determining if the range of movement is greater than the preset threshold.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, and having the teachings of Zomet and Lee before him or her, to modify the autostereoscopic display system of Zomet to include the difference determining measured movement position information threshold feature as described in Lee. 
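For orientation, the difference-and-threshold behavior that Lee is cited for, as the examiner maps it onto the claims, can be sketched as follows (a hypothetical illustration only; the function and variable names are ours and appear in no cited reference):

```python
import math

def viewpoint_moved_beyond_threshold(first_pos, second_pos, threshold):
    """Hypothetical sketch: compare head position at content activation
    (first_pos) with head position after the user moves (second_pos), and
    report whether the measured viewpoint movement exceeds the threshold."""
    movement = math.dist(first_pos, second_pos)  # difference between the two positions
    return movement > threshold

# e.g. head positions in meters, with a 3 cm threshold
viewpoint_moved_beyond_threshold((0.0, 0.0, 0.5), (0.06, 0.0, 0.5), 0.03)  # True
```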
The motivation for doing so would have been to improve user experience by providing a configuration that prevents image quality from being degraded due to crosstalk.

As per claim 11, Zomet discloses a non-transitory computer-readable medium having stored thereon computer-executable instructions which, when executed by a computer, cause the computer to execute operations (Zomet: Abstract), the operations comprising: receiving a specific image of a user (Zomet: Paras. [0025], [0028] disclose a computer that receives images and performs tasks.); determining, based on the specific image of the user, first position information of a head of the user at a time of activation of content (Zomet: Paras. [0012], [0057] disclose identifying a location of an observer (head/viewpoint position) in relation to the display.); and generating a confirmation image based on a viewpoint position of the user (Zomet: Paras. [0028], [0056]-[0057], [0063] disclose generating and displaying a confirmation image (calibration pattern) associated with crosstalk (ghosting artifacts) based on user input or location, dynamically adjusting based on the user's viewpoint position.). However, Zomet does not explicitly disclose "... determining a difference between the first position information of the head of the user and second position information of the head of the user, wherein the second position information is associated with a movement of the user; measuring a movement of a viewpoint of the user based on the determined difference; ... and the measured movement of the viewpoint of the user that exceeds a specific threshold value ...". Further, Lee is in the same field of endeavor and teaches determining a difference between the first position information of the head of the user and second position information of the head of the user, wherein the second position information is associated with a movement of the user (Lee: Paras. [0047], [0051], [0054] disclose determining a difference between a stationary position (e.g., eye positions intended for rendering) of the user [first position information of the head of the user] and actual eye positions of the user [second position information of the head of the user], wherein the actual eye positions of the user are associated with a movement of the user.); measuring a movement of a viewpoint of the user based on the determined difference (Lee: Paras. [0051], [0062] disclose measuring the range of movement based on whether the user is stationary or moving, based on the eye movement information.); and the measured movement of the viewpoint of the user exceeding a specific threshold value (Lee: Paras. [0058], [0062] disclose determining if the range of movement is greater than the preset threshold.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, and having the teachings of Zomet and Lee before him or her, to modify the autostereoscopic display system of Zomet to include the difference-determining, measured-movement, position-information threshold feature as described in Lee. The motivation for doing so would have been to improve user experience by providing a configuration that prevents image quality from being degraded due to crosstalk.

As per claim 12, Zomet discloses an information processing system (Zomet: Abstract), comprising: a camera (image sensor) configured to capture an image of a user (Zomet: Para. [0057] discloses the location of the user may be identified in real time using an image sensor.); an information processing apparatus including a central processing unit (CPU) configured to: receive the captured image of the user (Zomet: Paras. [0025], [0028] disclose a processor (CPU) and memory that receives images and performs tasks.); determine, based on the captured image of the user, first position information of a head of the user at a time of activation of content (Zomet: Paras. [0012], [0057] disclose identifying a location of an observer (head/viewpoint position) in relation to the display.); and generate a confirmation image based on a viewpoint position of the user (Zomet: Paras. [0028], [0056]-[0057], [0063] disclose generating and displaying a confirmation image (calibration pattern) associated with crosstalk (ghosting artifacts) based on user input or location, dynamically adjusting based on the user's viewpoint position.); and an image display apparatus configured to display the confirmation image (Zomet: Paras. [0006], [0025], [0056] disclose presenting/projecting (displaying) the calibration pattern (confirmation image) on the autostereoscopic display.). However, Zomet does not explicitly disclose "... determine a difference between the first position information of the head of the user and second position information of the head of the user, wherein the second position information is associated with a movement of the user; measure a movement of a viewpoint of the user based on the determined difference; ... and the measured movement of the viewpoint of the user that exceeds a specific threshold value ...". Further, Lee is in the same field of endeavor and teaches determining a difference between the first position information of the head of the user and second position information of the head of the user, wherein the second position information is associated with a movement of the user (Lee: Paras. [0047], [0051], [0054] disclose determining a difference between a stationary position (e.g., eye positions intended for rendering) of the user [first position information of the head of the user] and actual eye positions of the user [second position information of the head of the user], wherein the actual eye positions of the user are associated with a movement of the user.); measuring a movement of a viewpoint of the user based on the determined difference (Lee: Paras. [0051], [0062] disclose measuring the range of movement based on whether the user is stationary or moving, based on the eye movement information.); and the measured movement of the viewpoint of the user exceeding a specific threshold value (Lee: Paras. [0058], [0062] disclose determining if the range of movement is greater than the preset threshold.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, and having the teachings of Zomet and Lee before him or her, to modify the autostereoscopic display system of Zomet to include the difference-determining, measured-movement, position-information threshold feature as described in Lee. The motivation for doing so would have been to improve user experience by providing a configuration that prevents image quality from being degraded due to crosstalk.

Claims 2-3 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Zomet in view of Lee, in further view of Baker (US 2012/0140225 A1).

As per claim 2, Zomet-Lee disclose the information processing apparatus according to claim 1, wherein the confirmation image includes a pattern (Zomet: Para. [0056] discloses evaluating the confirmation image (estimated undesired distortion). For example, the user may be presented with a calibration pattern that is projected on the autostereoscopic display element.). However, Zomet-Lee do not explicitly disclose "... wherein the confirmation image includes a left-eye image that enters a left eye of the user, and a right-eye image that enters a right eye of the user and is different from the left-eye image." Further, Baker is in the same field of endeavor and teaches wherein the confirmation image (10, 20) includes a left-eye image (10) that enters a left eye of the user, and a right-eye image (20) that enters a right eye of the user and is different from the left-eye image (Baker: Paras. [0012]-[0013] disclose a confirmation image that includes a left-eye image (Fig. 1a, 10) that enters a left eye of the user, and a right-eye image (Fig. 1b, 20) that enters a right eye of the user and is different from the left-eye image.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, and having the teachings of Zomet-Lee and Baker before him or her, to modify the autostereoscopic display system of Zomet-Lee to include the left-eye/right-eye image feature as described in Baker. The motivation for doing so would have been to improve evaluation of L/R crosstalk in autostereoscopic display-based systems by providing advanced test pattern configurations to a viewer or evaluator.

As per claim 3, Zomet-Lee disclose the information processing apparatus according to claim 2, wherein the left-eye image includes a pattern, the right-eye image includes a pattern, and the pattern of the confirmation image includes at least one of a position of an object, a luminance of the object, a depth of the object, or a shape of the object (Baker: Paras. [0012]-[0013] disclose the left-eye and right-eye images include a pattern (18, 26), each pattern having a 100% white strip (12, 22) as a reference strip or region adjacent a plurality of contiguous grey scale "chips" (14a, 14b, 14c, 24a, 24b, 24c) that form a calibration rectangular strip or region, as shown in Figures 1a-b.).

As per claim 8, Zomet-Lee disclose the information processing apparatus according to claim 3, wherein the confirmation image is based on a first display parameter that is associated with a display of the pattern of the confirmation image, and the CPU is further configured to: perform inspection on a stereoscopic display at a specific timing; generate, based on each of the performed inspection and the first display parameter, one of the left-eye image or the right-eye image; and generate a second image of the user based on a second display parameter (Baker: Para. [0014] discloses: "More calibrated chips 14, 24 may be used for more resolution in quantifying the extinction ratio or leakage. The L/R extinction pattern 18, 26 may be placed at different screen locations in cases where the uniformity of extinction is not constant across the display area ... Animation or motion of the L/R pattern 18, 26 may be added as well to validate perceived extinction ratio on active shutter or temporally multiplexed left and right stereoscopic displays for both static and moving images." [i.e., the same test pattern can be shown once with a "first" set of parameters (e.g., at location A, or static), and then again with a "second" set of parameters (e.g., at location B, or animated)]).

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Zomet in view of Lee, in further view of Wu et al., hereinafter referred to as Wu (US 2014/0062710 A1).

As per claim 9, Zomet-Lee disclose the information processing apparatus according to claim 1 (Zomet: Paras. [0056]-[0057] disclose the user is presented with a calibration pattern [confirmation image] that is projected on the autostereoscopic display element.). However, Zomet-Lee do not explicitly disclose "... wherein the CPU is further configured to generate, based on the viewpoint position, a guide image, and the guide image guides the user to a specific position to observe the confirmation image ...". Further, Wu is in the same field of endeavor and teaches wherein the CPU is further configured to generate, based on the viewpoint position, a guide image, and the guide image guides the user to a specific position to observe the confirmation image (Wu: Para. [0019] discloses generating feedback 46, 48, 50 [guide images] based on the viewer's viewpoint position to guide the viewer to an optimal viewing zone or position.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, and having the teachings of Zomet-Lee and Wu before him or her, to modify the autostereoscopic display system of Zomet-Lee to include the guide image feature as described in Wu. The motivation for doing so would have been to improve viewing experience by providing techniques that aid the viewer by reducing trial-and-error-based approaches of the viewer moving to different viewing positions.

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Zomet in view of Lee, in further view of Guido et al., hereinafter referred to as Guido (US 2016/0277728 A1).

As per claim 13, Zomet-Lee disclose the information processing system according to claim 12. However, Zomet-Lee do not explicitly disclose "... further comprising a mirror configured to reflect the confirmation image, wherein the camera is further configured to capture the reflected confirmation image, and the CPU is further configured to determine, based on the reflected confirmation image, each of an occurrence of the crosstalk and a degree of the crosstalk."
Further, Guido is in the same field of endeavor and teaches further comprising a mirror configured to reflect the confirmation image, wherein the camera is further configured to capture the reflected confirmation image, and the CPU is further configured to determine, based on the reflected confirmation image, each of an occurrence of the crosstalk and a degree of the crosstalk (Guido: Paras. [0075]-[0077] disclose the pattern used for calibration [confirmation image] determines the level and occurrence of crosstalk in the resulting image captured by the camera through the reflecting surface [mirror].). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, and having the teachings of Zomet-Lee and Guido before him or her, to modify the autostereoscopic display image camera system of Zomet-Lee to include the mirror-reflected confirmation image feature as described in Guido. The motivation for doing so would have been to improve evaluation of calibration patterns in complex test environments by providing a simplified calibration process that requires fewer resources.

Claims 4-7 are rejected under 35 U.S.C. 103 as being unpatentable over Zomet in view of Lee, in view of Baker, in further view of Harman (US 2002/0075384 A1).

As per claim 4, Zomet-Lee disclose the information processing apparatus according to claim 3, wherein the CPU is further configured to determine (Baker: Paras. [0013], [0015] disclose the viewer/evaluator alternatively closes each eye and the leakage strip 22′ overlays the calibrated chips 14 so as to allow a visual comparison of the leakage or ghost image to the nearest chip value 16 for each eye independently.). Further, Harman is in the same field of endeavor and teaches whether the user is closing the left eye or the right eye based on the first image that includes the user (Harman: Paras. [0007], [0067]-[0068] disclose capturing the observer's eyes via camera 1 and detecting if the observer has closed their eyes.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, and having the teachings of Zomet-Lee-Baker and Harman before him or her, to modify the autostereoscopic display imaging system of Zomet-Lee-Baker to include the left-eye/right-eye state based on a first image feature as described in Harman. The motivation for doing so would have been to improve autostereoscopic viewing systems by providing an advanced eye tracking configuration that enables greater location precision of the observer's eyes in the x, y and z directions.

As per claim 5, Zomet-Lee-Baker disclose the information processing apparatus according to claim 4, wherein the CPU is further configured to generate the confirmation image based on the one of the closure of the left eye of the user or the closure of the right eye of the user, and a difference threshold value of the user (Baker: Paras. [0013], [0016] disclose the viewer/evaluator alternatively closes each eye and the leakage strip 22′ [difference threshold] (representing the percentage of extinction ratio or crosstalk) overlays the calibrated chips 14 so as to allow a visual comparison of the leakage or ghost image to the nearest chip value 16 for each eye independently, as shown in Figure 2.).

As per claim 6, Zomet-Lee-Baker disclose the information processing apparatus according to claim 4, wherein the CPU is further configured to: generate the confirmation image including the pattern of the confirmation image; and determine, based on the pattern of the confirmation image, visual recognition associated with one of the left eye of the user or the right eye of the user (Baker: Paras. [0013], [0016] disclose the viewer/evaluator alternatively closes each eye and the leakage strip 22′ overlays the calibrated chips 14 so as to allow a visual comparison [i.e., visual recognition performed with the left eye or the right eye of the user can be confirmed] of the leakage/crosstalk to the nearest chip value 16 for each eye independently, as shown in Figure 2.).

As per claim 7, Zomet-Lee-Baker disclose the information processing apparatus according to claim 4, wherein the CPU is further configured to: perform an inspection of a stereoscopic display at a specific timing, wherein a crosstalk value is associated with the inspection; and generate, based on the determined one of the closure of the left eye of the user or the closure of the right eye of the user, one of the left-eye image or the right-eye image, wherein each of the left-eye image and the right-eye image includes luminance information associated with the crosstalk value (Zomet: Paras. [0055], [0058], [0066], [0069] disclose a filter for reducing crosstalk is adjusted according to the level of illumination in the images; and Baker: Figs. 1a-b and Paras. [0014], [0016] disclose the viewer/evaluator alternatively closes each eye to allow a visual comparison for each eye independently, and display gamma and black level of the left and right eye images are adjusted [i.e., the left-eye image or the right-eye image luminance is adjusted] to obtain the best accuracy in estimating the crosstalk level with the L/R pattern 18, 26 associated with the left and right eye images 10, 20.).

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Zomet in view of Lee, in view of Baker, in further view of Wu.

As per claim 14, Zomet-Lee disclose the information processing system according to claim 12, wherein the image display apparatus is further configured to display, for the user, the confirmation image (Zomet: Para. [0056] discloses the user may be presented with a calibration pattern that is projected on the autostereoscopic display element.). However, Zomet-Lee do not explicitly disclose "... the confirmation image includes each of a left-eye image of the user and a right-eye image of the user, and the CPU is further configured to generate a guide image that guides the user to a specific position to observe the confirmation image." Further, Baker is in the same field of endeavor and teaches the confirmation image includes each of a left-eye image of the user and a right-eye image of the user (Baker: Paras. [0013], [0016] disclose the viewer/evaluator alternatively closes each eye and the leakage strip 22′ allows a visual comparison of the leakage or ghost image to the nearest chip value 16 for each eye independently, as shown in Figure 2.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, and having the teachings of Zomet-Lee and Baker before him or her, to modify the autostereoscopic display system of Zomet-Lee to include the left-eye/right-eye image feature as described in Baker. The motivation for doing so would have been to improve evaluation of L/R crosstalk in autostereoscopic display-based systems by providing advanced test pattern configurations to a viewer or evaluator. However, Zomet-Lee-Baker do not explicitly disclose "... and the CPU is further configured to generate a guide image that guides the user to a specific position to observe the confirmation image ...". Further, Wu is in the same field of endeavor and teaches generating a guide image that guides the user to a specific position to observe the confirmation image (Wu: Para. [0019] discloses generating feedback 46, 48, 50 [guide images] based on the viewer's viewpoint position to guide the viewer to an optimal viewing zone or position.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, and having the teachings of Zomet-Lee-Baker and Wu before him or her, to modify the autostereoscopic display system of Zomet-Lee-Baker to include the guide image feature as described in Wu. The motivation for doing so would have been to improve the viewing experience by providing techniques that aid the viewer by reducing trial-and-error approaches of the viewer moving to different viewing positions.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and can be viewed in the list of references. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PUNEET DHILLON, whose telephone number is (571) 270-5647. The examiner can normally be reached M-F, 5:00 am-1:30 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sath V. Perungavoor, can be reached at 571-272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PUNEET DHILLON/
Primary Examiner, Art Unit 2488
Date: 12-28-2025

Prosecution Timeline

Apr 02, 2024
Application Filed
Jul 18, 2025
Non-Final Rejection — §103, §112
Oct 16, 2025
Response Filed
Dec 28, 2025
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598346
A DISPLAY DEVICE AND OPERATION METHOD THEREOF
2y 5m to grant Granted Apr 07, 2026
Patent 12567263
IMAGING SYSTEM
2y 5m to grant Granted Mar 03, 2026
Patent 12548338
OBJECT SAMPLING METHOD AND IMAGE ANALYSIS APPARATUS
2y 5m to grant Granted Feb 10, 2026
Patent 12536812
CAMERA PERCEPTION TECHNIQUES TO DETECT LIGHT SIGNALS OF AN OBJECT FOR DRIVING OPERATION
2y 5m to grant Granted Jan 27, 2026
Patent 12537911
VIDEO PROCESSING APPARATUS
2y 5m to grant Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
83%
Grant Probability
99%
With Interview (+18.4%)
2y 6m
Median Time to Grant
Moderate
PTA Risk
Based on 281 resolved cases by this examiner. Grant probability derived from career allow rate.
