Prosecution Insights
Last updated: April 19, 2026
Application No. 19/021,692

ELECTRONIC APPARATUS AND CONTROL METHOD THEREOF

Non-Final OA: §102, §103, §112
Filed: Jan 15, 2025
Examiner: NAVAS JR, EDEMIO
Art Unit: 2483
Tech Center: 2400 — Computer Networks
Assignee: Samsung Electronics Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 71% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
Grant Probability with Interview: 96%

Examiner Intelligence

Career Allow Rate: 71%, above average (384 granted / 540 resolved; +13.1% vs TC avg)
Interview Lift: +24.7% across resolved cases with interview (strong)
Typical Timeline: 2y 9m avg prosecution; 31 currently pending
Career History: 571 total applications across all art units
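The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic (variable names are illustrative, not from any real API):

```python
# Career allow rate: granted / resolved, shown rounded as 71%.
granted, resolved = 384, 540
allow_rate = 100 * granted / resolved
print(f"Career allow rate: {allow_rate:.1f}%")   # 71.1%

# Interview lift: grant probability with an interview minus the baseline.
# The page reports 96% with interview; the exact +24.7% figure is
# presumably computed on unrounded underlying rates.
lift = 96.0 - allow_rate
print(f"Interview lift: about +{lift:.1f} points")
```

This reproduces the displayed 71% and the roughly 25-point interview lift; the small gap to the reported +24.7% likely comes from rounding in the source data.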

Statute-Specific Performance

§101: 3.2% (-36.8% vs TC avg)
§103: 60.1% (+20.1% vs TC avg)
§102: 23.5% (-16.5% vs TC avg)
§112: 8.2% (-31.8% vs TC avg)

Tech Center averages are estimates; based on career data from 540 resolved cases.
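The "vs TC avg" deltas let one back out the implied Tech Center baseline for each statute. A quick check (values transcribed from the figures above; what each percentage measures is as defined by the page):

```python
# (examiner rate, delta vs Tech Center average), both in percent
stats = {"101": (3.2, -36.8), "103": (60.1, 20.1),
         "102": (23.5, -16.5), "112": (8.2, -31.8)}
for statute, (rate, delta) in stats.items():
    implied_tc_avg = round(rate - delta, 1)   # baseline = rate minus delta
    print(f"§{statute}: {rate}% ({delta:+.1f} vs implied TC avg {implied_tc_avg}%)")
```

Every implied baseline works out to 40.0%, which suggests the chart compares each statute against a single flat Tech Center estimate rather than per-statute averages.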

Office Action

Rejections under §102, §103 and §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

TITLE

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. See MPEP 606.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 01/15/2025 and 06/10/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Objections

Claim 7 is objected to because of the following informalities: grammatical error. The claim states “while the the display is being operated in the 3D mode,” which is believed by the examiner to merely be a simple mistake/typo and will as such be interpreted as “while the display is being operated in the 3D mode”. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 2, 5, 8, 12, 16 and 19 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the enablement requirement. The claims contain subject matter which was not described in the specification in such a way as to enable one skilled in the art to which it pertains, or with which it is most nearly connected, to make and/or use the invention.

Regarding claim 2, the limitation “identify a probability that content to be displayed is stereoscopic content for each content duration, control the display to be operated in the 3D mode in a content duration in which the identified probability of the stereoscopic content is a threshold value or more, and control the display to be operated in the 2D mode in a content duration in which the identified probability of the stereoscopic content is less than the threshold value” is not fully supported by the specification as required by MPEP 2161.01(I). The specification fails to provide adequate support for a processor performing the specialized function of “identifying a probability of the stereoscopic content is a threshold value”; ¶0173 of the instant application defines the invention in functional language specifying a desired result but does not sufficiently identify how the inventor has devised the function to be performed or result achieved. That is, ¶0173 of the originally filed specification states that a first threshold value may be changed based on the user context, and that the user context may include a user profile, viewing environment information, or the like; it uses an example where the threshold value may be “relatively low in a viewing environment where the user may easily recognize the 3D content”, as well as another example wherein the “first threshold value may be relatively high for the user who has difficulty in recognizing the 3D content (e.g., older user or user with poor eyesight)”.
First, the value of this threshold is relative; in such a case, what does “relatively low” mean? Is 50% relatively low? Is it 20%? Is it even a percentage, or is it a value range from 0-20? 0 to 4? Secondly, in the first given example, how does an environment allow the user to “easily recognize the 3D content”? This is undefined by the specification, i.e., the exact identifiers of such an environment are not precisely described, and as such, how could a threshold be defined by one of ordinary skill if even the qualifiers of an environment for the threshold are undefined? With reference to the second example given by the specification, what is considered “poor eyesight” and what is considered an “older user”? These may themselves be additional thresholds which are not properly described and defined by the instant application.

Regarding claim 5, the limitation “identify whether similarity between a left region of the side-by-side content and a right region of the side-by-side content is a threshold value for each side-by-side content duration” is similarly not fully supported by the specification as required by MPEP 2161.01(I). Within the originally filed specification, as described in ¶0192-0193, a second threshold value is cited as being a fixed value or a changeable value, and may be a value set at the manufacturing time or a value set by the user. Then, in ¶0196, the similarity between the left region and the right region of the content is identified by converting the average value of the difference values of the respective pixels into a value for comparison with the second threshold value. The examiner notes that a similarity between the left and right regions may indeed be defined by averaging the difference values between the left and right regions; however, once again the threshold value is left undefined and cannot be appropriately ascertained.
As far as the examiner can surmise, the content merely being 3D content is sufficient to pass a threshold, and the content being 2D is insufficient. Claims 5, 8, 16 and 19 will be interpreted in this manner for the purposes of compact prosecution. However, as regards HOW the threshold is ascertained by the system, and what is involved in such calculations, the specification does not fully provide support.

It should be noted that the written description requirement under 35 USC 112(a) is not satisfied by stating that one of ordinary skill in the art could devise an algorithm to perform a specialized programmed function. For written description, the specification as filed must describe the claimed invention in sufficient detail so that one of ordinary skill in the art can reasonably conclude that the inventor had possession of the claimed invention. An original claim may lack written description when the claim defines the invention in functional language specifying a desired result but the specification does not sufficiently identify how the inventor has devised the function to be performed or result achieved. For software, this can occur when the algorithm or process/steps for performing the computer function are not explained at all or are not explained in sufficient detail (simply restating the function recited in the claim is not necessarily sufficient). See MPEP 2161.01(I).

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1, 6, 7, 9, 11, 15 and 20 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Huang et al. (“Huang”) (U.S. PG Publication No. 2022/0103805).

In regards to claim 1, Huang teaches an electronic apparatus comprising: a display configured to be operated in a three-dimensional (3D) mode and a two-dimensional (2D) mode (See ¶0007 and 0010 in view of FIG. 5-7, wherein a 3D display system captures an image of a user and determines their face and sight direction in order to determine whether to display portions of the display in 3D or 2D); at least one camera to capture an image in front of the display (See ¶0007 and 0010 in view of FIG. 3-5); a memory storing at least one instruction (See ¶0041); and at least one processor configured to execute the at least one instruction to (See ¶0040-0041): identify whether a user is positioned in front of the display based on the captured image, when the user is identified as being positioned in front of the display, identify whether a user gaze is directed toward a front of the display, when the user gaze is identified as being directed toward the front of the display, control the display to be operated in the 3D mode, and when the user is not identified as being positioned in front of the display, or when the user gaze is identified as not being directed toward the front of the display, control the display to be operated in the 2D mode (See at least ¶0007 and 0010 in view of FIG. 1 and 3-7, wherein the user is imaged by the camera to determine their position, facial direction and gaze; accordingly, the portions of the display which are being watched/gazed at by the user are set to the 3D mode, while areas toward which the user gaze is not directed are operated in the 2D mode).
In regards to claim 6, Huang teaches the apparatus as claimed in claim 1, wherein the at least one processor is configured to execute the at least one instruction to, while the display is being operated in the 3D mode: when the user is not identified as being positioned in front of the display, or when the user gaze is identified as not being directed toward the front of the display, control the display to switch to the 2D mode (See FIG. 5-7).

In regards to claim 7, Huang teaches the apparatus as claimed in claim 1, wherein the at least one processor is configured to execute the at least one instruction to, while the display is being operated in the 3D mode: when the user gaze is identified as being directed toward the front of the display, identify whether content to be displayed is stereoscopic content for each content duration, and control the display to be switched to the 3D mode when the content to be displayed is identified as the stereoscopic content (See FIG. 5-7 in view of ¶0026, which describes that the image stream may be of 3D rendering, and as such would be rendered in 3D given the conditions of FIG. 5-7 previously described).

In regards to claim 9, Huang teaches the apparatus as claimed in claim 1, wherein the at least one processor is configured to execute the at least one instruction to: identify the user as being positioned in front of the display when a specific body part of the user is included in the captured image or a user body region that has a predetermined ratio or more is identified as being included in the captured image (See FIG. 1 and 3-7).

In regards to claim 11, the claim is rejected under the same basis as claim 1 by Huang.

In regards to claims 15 and 20, the claims are rejected under the same basis as claims 1 and 6, respectively, by Huang, wherein the non-transitory computer readable medium is taught as seen in ¶0041.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 2, 5, 8, 12, 16 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Huang et al. (“Huang”) (U.S. PG Publication No. 2022/0103805) in view of Zhou et al. (“Zhou”) (U.S. PG Publication No. 2014/0313286).

In regards to claim 2, Huang fails to teach the apparatus as claimed in claim 1, wherein the at least one processor is configured to execute the at least one instruction to: when the user gaze is identified as being directed toward the front of the display, identify a probability that content to be displayed is stereoscopic content for each content duration, control the display to be operated in the 3D mode in a content duration in which the identified probability of the stereoscopic content is a threshold value or more, and control the display to be operated in the 2D mode in a content duration in which the identified probability of the stereoscopic content is less than the threshold value.
In a similar endeavor, Zhou teaches: when the user gaze is identified as being directed toward the front of the display, identify a probability that content to be displayed is stereoscopic content for each content duration, control the display to be operated in the 3D mode in a content duration in which the identified probability of the stereoscopic content is a threshold value or more (See ¶0018 and 0048 of Zhou, wherein the system may determine whether the video content is in 2D format or 3D format and will display accordingly, thus identifying the probability of the stereoscopic content surpassing a threshold [i.e., whether it is 3D or not]; this is taken in view of Huang’s teachings, which consider the user’s gaze for content display as seen in at least FIG. 5-7 and described above), and control the display to be operated in the 2D mode in a content duration in which the identified probability of the stereoscopic content is less than the threshold value (See ¶0018 and 0048 of Zhou in view of FIG. 5-7 of Huang as described above).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teaching of Zhou into Huang because it allows the image format to be considered in determining the ideal way to display images through a streamlined operation process as described in ¶0018, thus providing an efficient image display determination process.
In regards to claim 5, Huang fails to teach the apparatus as claimed in claim 1, wherein content to be displayed includes side-by-side content, and the at least one processor is configured to execute the at least one instruction to: identify whether similarity between a left region of the side-by-side content and a right region of the side-by-side content is a threshold value for each side-by-side content duration, control the display to be operated in the 3D mode in the side-by-side content duration in which the similarity between the left region of the side-by-side content and the right region of the side-by-side content is the threshold value or more, and control the display to be operated in the 2D mode in the side-by-side content duration in which the similarity between the left region of the side-by-side content and the right region of the side-by-side content is less than the threshold value.

In a similar endeavor, Zhou teaches wherein content to be displayed includes side-by-side content (See ¶0033), and the at least one processor is configured to execute the at least one instruction to: identify whether similarity between a left region of the side-by-side content and a right region of the side-by-side content is a threshold value for each side-by-side content duration, control the display to be operated in the 3D mode in the side-by-side content duration in which the similarity between the left region of the side-by-side content and the right region of the side-by-side content is the threshold value or more (See ¶0018 and 0048 of Zhou, wherein the system may determine whether the video content is in 2D format or 3D format and will display accordingly, thus identifying whether the threshold has been surpassed [i.e., by determining whether it is 3D or not]; this is taken in view of Huang’s teachings, which consider the user’s gaze for content display as seen in at least FIG. 5-7 and described above), and control the display to be operated in the 2D mode in the side-by-side content duration in which the similarity between the left region of the side-by-side content and the right region of the side-by-side content is less than the threshold value (See ¶0018 and 0048 of Zhou in view of FIG. 5-7 of Huang as described above).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teaching of Zhou into Huang because it allows the image format to be considered in determining the ideal way to display images through a streamlined operation process as described in ¶0018, thus providing an efficient image display determination process.

In regards to claim 8, Huang fails to teach the apparatus as claimed in claim 1, wherein the at least one processor is configured to execute the at least one instruction to: identify a probability that content to be displayed is stereoscopic content for each content duration, identify whether the user gaze is directed toward the front of the display based on the captured image in the content duration in which the identified probability of the stereoscopic content is a threshold value or more, control the display to be operated in the 3D mode when the user gaze is identified as being directed toward the front of the display and the identified probability of the stereoscopic content is the threshold value or more, and control the display to be operated in the 2D mode in the content duration in which the identified probability of the stereoscopic content is less than the threshold value.
In a similar endeavor, Zhou teaches: identify a probability that content to be displayed is stereoscopic content for each content duration, identify whether the user gaze is directed toward the front of the display based on the captured image in the content duration in which the identified probability of the stereoscopic content is a threshold value or more, control the display to be operated in the 3D mode when the user gaze is identified as being directed toward the front of the display and the identified probability of the stereoscopic content is the threshold value or more (See ¶0018 and 0048 of Zhou, wherein the system may determine whether the video content is in 2D format or 3D format and will display accordingly, thus identifying whether the threshold has been surpassed [i.e., by determining whether it is 3D or not]; this is taken in view of Huang’s teachings, which consider the user’s gaze for content display as seen in at least FIG. 5-7 and described above), and control the display to be operated in the 2D mode in the content duration in which the identified probability of the stereoscopic content is less than the threshold value (See ¶0018 and 0048 of Zhou in view of FIG. 5-7 of Huang as described above).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teaching of Zhou into Huang because it allows the image format to be considered in determining the ideal way to display images through a streamlined operation process as described in ¶0018, thus providing an efficient image display determination process.

In regards to claim 12, the claim is rejected under the same basis as claim 2 by Huang in view of Zhou.

In regards to claims 16 and 19, the claims are rejected under the same basis as claims 2 and 5, respectively, by Huang in view of Zhou.

Claim(s) 3, 13 and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Huang et al. (“Huang”) (U.S. PG Publication No. 2022/0103805) in view of Zhou et al. (“Zhou”) (U.S. PG Publication No. 2014/0313286) and Sibley et al. (“Sibley”) (U.S. PG Publication No. 2022/0117211).

In regards to claim 3, Huang fails to teach the apparatus as claimed in claim 2, wherein the at least one processor is configured to execute the at least one instruction to: input the content to be displayed into a trained artificial intelligence model trained for each content duration, and identify the probability that the content to be displayed is the stereoscopic content based on information output from the trained artificial intelligence model.

In a similar endeavor, Sibley teaches wherein the at least one processor is configured to execute the at least one instruction to: input the content to be displayed into a trained artificial intelligence model trained for each content duration, and identify the probability that the content to be displayed is the stereoscopic content based on information output from the trained artificial intelligence model (See ¶0075).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teaching of Sibley into Huang because it allows the system to compare and correspond features or portions of one or more images in order to facilitate detection, identification and treatment of objects within 2D and 3D images as described in ¶0075.

In regards to claim 13, the claim is rejected under the same basis as claim 3 by Huang in view of Zhou and Sibley.

In regards to claim 17, the claim is rejected under the same basis as claim 3 by Huang in view of Zhou and Sibley.

Claim(s) 4, 14 and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Huang et al. (“Huang”) (U.S. PG Publication No. 2022/0103805) in view of Zhou et al. (“Zhou”) (U.S. PG Publication No. 2014/0313286) and Kim et al. (“Kim”) (U.S. PG Publication No. 2021/0132688).
In regards to claim 4, Huang fails to teach the apparatus as claimed in claim 2, wherein the at least one processor is configured to execute the at least one instruction to: input the captured image into a trained artificial intelligence model, and identify whether the user gaze is directed toward the front of the display based on information output from the trained artificial intelligence model.

In a similar endeavor, Kim teaches wherein the at least one processor is configured to execute the at least one instruction to: input the captured image into a trained artificial intelligence model, and identify whether the user gaze is directed toward the front of the display based on information output from the trained artificial intelligence model (See ¶0057-0058 in view of FIG. 4 and 5A of Kim, wherein images of the user are captured and the gaze information is used as input into a neural network to identify where the user is viewing; this is taken in view of Huang’s teachings of FIG. 5-7).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teaching of Kim into Huang because it allows for the improvement of video quality in the targeted areas which are being viewed by the user, considered as attention regions, while regions outside of the attention region receive less processing power, as described in at least ¶0057, thus increasing overall efficiency and lowering the processing requirements of the overall display image.

In regards to claim 14, the claim is rejected under the same basis as claim 4 by Huang in view of Zhou and Kim.

In regards to claim 18, the claim is rejected under the same basis as claim 4 by Huang in view of Zhou and Kim.

Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Huang et al. (“Huang”) (U.S. PG Publication No. 2022/0103805) in view of Koo (U.S. PG Publication No. 2021/0373702).
In regards to claim 10, Huang fails to teach the apparatus as claimed in claim 1, wherein the display is implemented as a light field display including a lenticular lens array, and the at least one processor is configured to control the display to be operated either in the 3D mode or the 2D mode by adjusting a voltage applied to the lenticular lens array included in the light field display.

In a similar endeavor, Koo teaches wherein the display is implemented as a light field display including a lenticular lens array, and the at least one processor is configured to control the display to be operated either in the 3D mode or the 2D mode by adjusting a voltage applied to the lenticular lens array included in the light field display (See ¶0117-0119 and 0159).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the teaching of Koo into Huang because it allows for switching between 3D and 2D display by simply applying an appropriate voltage which changes an effective refractive index, as described in at least ¶0159.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDEMIO NAVAS JR, whose telephone number is (571) 270-1067. The examiner can normally be reached M-F, approximately 9 AM - 6 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Joseph Ustaris, can be reached at 571-272-7383. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EDEMIO NAVAS JR/
Primary Examiner, Art Unit 2483
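The side-by-side similarity scheme the examiner discusses (spec ¶0196: averaging the per-pixel difference values between the left and right regions and comparing against a threshold) can be sketched as follows. This is a hypothetical reconstruction for illustration only; the function name, the mapping to a 0-1 score, and the threshold value are all assumptions, and the threshold is exactly the detail the Office Action says the specification leaves undefined.

```python
def side_by_side_similarity(frame):
    """Average per-pixel absolute difference between the left and right
    halves of a side-by-side frame, mapped to a 0-1 similarity score.
    `frame` is a 2-D list of 8-bit grayscale values with an even width."""
    half = len(frame[0]) // 2
    diffs = [abs(row[x] - row[x + half]) for row in frame for x in range(half)]
    mean_diff = sum(diffs) / len(diffs)   # "average value of the difference values"
    return 1.0 - mean_diff / 255.0        # identical halves score 1.0

# The threshold value itself is what the OA says is undefined; 0.9 is a placeholder.
THRESHOLD = 0.9
frame = [[10, 12, 10, 12], [20, 22, 20, 22]]  # left half equals right half
mode = "3D" if side_by_side_similarity(frame) >= THRESHOLD else "2D"
```

With identical left and right halves the score is 1.0 and the sketch selects the 3D mode; how the specification actually converts the averaged difference into the "second threshold value" comparison remains the open §112 question.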

Prosecution Timeline

Jan 15, 2025
Application Filed
Jan 06, 2026
Non-Final Rejection — §102, §103, §112
Mar 24, 2026
Applicant Interview (Telephonic)
Mar 27, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598398
Terminal Detection Platform
2y 5m to grant; granted Apr 07, 2026
Patent 12598283
METHOD AND DISPLAY APPARATUS FOR CORRECTING DISTORTION CAUSED BY LENTICULAR LENS
2y 5m to grant; granted Apr 07, 2026
Patent 12593141
INFORMATION MANAGEMENT DEVICE, INFORMATION MANAGEMENT METHOD, AND STORAGE MEDIUM FOR MANAGING INFORMATION PROVIDED TO A MOBILE OBJECT AND DEVICE USED BY A USER IN LOCATION DIFFERENT FROM THE MOBILE OBJECT
2y 5m to grant; granted Mar 31, 2026
Patent 12587686
SIGNALING FOR GENERAL CONSTRAINT INFORMATION IN VIDEO CODING
2y 5m to grant; granted Mar 24, 2026
Patent 12587643
IMAGE ENCODING/DECODING METHOD AND DEVICE, AND RECORDING MEDIUM IN WHICH BITSTREAM IS STORED FOR BLOCK DIVISION AT PICTURE BOUNDARY
2y 5m to grant; granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 71%
With Interview: 96% (+24.7%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 540 resolved cases by this examiner. Grant probability derived from career allow rate.
