Prosecution Insights
Last updated: April 19, 2026
Application No. 18/727,819

APPARATUS AND METHOD OF PROCESSING IMAGE, DISPLAY DEVICE, ELECTRONIC DEVICE, AND STORAGE MEDIUM

Status: Non-Final OA (§103)
Filed: Jul 10, 2024
Examiner: PICON-FELICIANO, ANA J
Art Unit: 2482
Tech Center: 2400 — Computer Networks
Assignee: BEIJING BOE TECHNOLOGY DEVELOPMENT CO., LTD.
OA Round: 1 (Non-Final)
Grant Probability: 69% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 69%, above average (294 granted / 428 resolved; +10.7% vs TC avg)
Interview Lift: +21.8% allowance rate for resolved cases with interview (strong)
Typical Timeline: 2y 11m average prosecution; 31 currently pending
Career History: 459 total applications across all art units
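The headline figures above can be reproduced from the raw counts shown (294 granted of 428 resolved; +21.8% interview lift). How the dashboard combines the lift with the base rate is an assumption; a simple additive bump reproduces the displayed 90%:

```python
# Reproduce the dashboard's headline examiner statistics from the raw
# counts shown above: 294 granted out of 428 resolved cases, and a
# +21.8% allowance lift for cases with an examiner interview.

granted = 294
resolved = 428

career_allow_rate = granted / resolved * 100
print(f"Career allow rate: {career_allow_rate:.1f}%")  # 68.7%, shown rounded as 69%

# Assumption: the "with interview" tile is the base rate plus the
# additive lift, which reproduces the displayed 90%.
with_interview = career_allow_rate + 21.8
print(f"With interview: {with_interview:.0f}%")
```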

Statute-Specific Performance

§101: 4.3% (-35.7% vs TC avg)
§103: 60.1% (+20.1% vs TC avg)
§102: 12.7% (-27.3% vs TC avg)
§112: 11.2% (-28.8% vs TC avg)

Deltas are vs the Tech Center average estimate • Based on career data from 428 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. This Office Action is sent in response to Applicant's Communication received on July 10, 2024 for application 18/727,819. This Office hereby acknowledges receipt of the following, placed of record in the file: Specification, Drawings, Abstract and Claims.

3. Claims 1-18 and 20-21 are presented for examination. Claims 19 and 22 have been canceled.

Information Disclosure Statement

4. The information disclosure statement (IDS) submitted on December 31, 2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Interpretation

5. The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

6. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art.
The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and
(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

7. Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

8. This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: the acquisition unit, adjustment unit and output unit in claim 1; the first determination unit in claims 2 and 3; the second determination unit in claims 4 and 5; the adjustment unit in claims 6 and 7; and the output unit in claim 8. Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

9. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

10. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

11. Claims 1, 2, 8, 10-12 and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over HAN et al. (US 2017/0230647 A1) (hereinafter Han) in view of WANG (machine translation of CN 106254851 B) (hereinafter Wang).

Regarding claims 1 and 11: Han discloses an apparatus of processing an image and a method of processing an image [See Han: at least Figs. 1-8 and par. 9-21 regarding a multiview image display device, controller and method for processing a multiview image], comprising:

an acquisition unit configured to acquire / acquiring multi-view images [See Han: at least Figs. 2A-8 and par. 44-53 regarding the image receiver 110 may receive the image from a variety of external devices such as an external storage medium, a broadcasting station, a web server, and the like. Here, the received image is any one image of a single viewpoint image, a stereo image, and a multiview image. In addition, the multiview image means the 3D video image that provides various viewpoints in several directions to the user by geometrically correcting images photographed by one or more photographing devices and spatially synthesizing the images… The renderer 120 may render a plurality of views having different viewpoints…];

an adjustment unit configured to adjust / adjusting a pixel grayscale value of at least one target pixel in at least one target region in the multi-view images according to a predetermined mapping relationship, so as to obtain adjusted multi-view images [See Han: at least Figs. 2A-8 and par. 44-53, 66-77, 85-88, 91, 95, 100-118 regarding In addition, the image receiver 110 may receive depth information of the image. In general, the depth of the image is a depth value assigned to each of the pixels of the image. As an example, the depth of 8 bits may have grayscale values of 0 to 255. For example, on the basis of black and white, the black (a low value) may represent a position which is distant from the viewer and the white (a high value) may represent a position which is close to the viewer.
A depth map means a table including the depth information for each of the zones of the image. The zone may also be classified in a pixel unit and may also be defined as a predetermined zone greater than the pixel unit. According to one example, the depth map may have a form in which values smaller than 127 or 128 are represented as a negative (−) value and values greater than 127 or 128 are represented as a positive (+) value, based on 127 or 128 of the grayscale values of 0 to 255 as a reference value, that is, 0 (or a focal plane). A reference value of the focal plane may be arbitrarily selected between 0 and 255. Here the negative (−) value means a depression, and the positive (+) value means a protrusion… Further, the multiview image display device 100 may further include a depth adjuster (not shown) that adjusts the depth of the image input based on the depth information according to various references, and in this case, the renderer 120 may render the plurality of views based on the image of which the depth is adjusted by the depth adjuster (not shown)…];

an output unit configured to obtain / obtaining, according to the adjusted multi-view images, a processed image for displaying on the display apparatus, and output / outputting the processed image [See Han: at least Figs. 2A-8 and par. 44-56, 66-77, 85-88, 91, 95, 100-118 regarding In particular, the controller 140 generates the multiview image to be displayed on the display 130 based on the sub-pixel values configuring the plurality of views having different viewpoints rendered by the renderer 120 … the controller 140 may generate the multiview image to be output to the display 130 by mixing at least two sub-pixel values corresponding to views of at least two viewpoints and mapping the mixed sub-pixel value to the specific sub-pixel zone… Further, in Fig. 8, a multiview image is generated based on pixel values configuring the plurality of rendered views (S820)… Next, the generated multiview image is displayed (S830).].

Han does not explicitly disclose wherein the predetermined mapping relationship is obtained by measuring a ghosting sensitivity of a display apparatus. However, obtaining a predetermined mapping relationship by measuring a ghosting sensitivity of a display apparatus was well known in the art at the time the invention was filed, as evident from the teaching of Wang [See Wang: at least Fig. 1 and Description section, page 2 line 28 through page 3 line 16, regarding a method for reducing crosstalk in a multi-view naked-eye 3D display comprising the following steps: step (A), measuring the angular brightness distribution YN of the multi-view naked-eye 3D display, where N represents the total number of views; step (B), determining from the angular brightness distribution the inter-view light-leakage ratio coefficients cm,n, where cm,n represents the light leakage ratio from viewpoint m to viewpoint n, with 1 ≤ m ≤ N and 1 ≤ n ≤ N; step (C), measuring the mapping relation between the brightness and the input gray scale of the display; step (D), converting, according to that mapping relation, the gray value of each pixel of each viewpoint image into a luminance value LN; step (E), calculating, for every pixel of each viewpoint image, the crosstalk-reduced output luminance LN′; step (F), converting, according to the same mapping relation, the luminance value of each pixel of each viewpoint image back into a gray-scale value, so as to determine the gray scale of each pixel of each crosstalk-reduced viewpoint image and thereby drive the screen to display…].
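Wang's steps (C) through (F) amount to: convert each view's gray levels to luminance via the display's measured mapping, solve for the luminances that reproduce the intended ones after inter-view light leakage, and convert back to gray. A minimal two-view sketch in that spirit, where the gamma-2.2 mapping and the leakage coefficients are illustrative placeholders rather than measured values from the cited references:

```python
# Two-view crosstalk pre-compensation in the spirit of Wang's steps (C)-(F).
# The gamma-2.2 gray<->luminance mapping and the leakage matrix below are
# illustrative placeholders, not values from the cited references.

GAMMA = 2.2

def gray_to_lum(g):  # stand-in for the measured gray -> luminance mapping
    return (g / 255.0) ** GAMMA

def lum_to_gray(l):  # inverse mapping, clamped to the displayable range
    return round(255.0 * max(l, 0.0) ** (1.0 / GAMMA))

# c[m][n]: fraction of view n's luminance leaking into view m (step (B)).
c = [[1.00, 0.08],
     [0.08, 1.00]]

def precompensate(g1, g2):
    """Solve the 2x2 system c @ L_out = L_intended for the output
    luminances (step (E)), then map back to gray levels (step (F))."""
    l1, l2 = gray_to_lum(g1), gray_to_lum(g2)
    det = c[0][0] * c[1][1] - c[0][1] * c[1][0]
    out1 = (c[1][1] * l1 - c[0][1] * l2) / det
    out2 = (c[0][0] * l2 - c[1][0] * l1) / det
    return lum_to_gray(out1), lum_to_gray(out2)

# A bright pixel paired with a dark one is driven darker than requested
# so that, after leakage, the viewer sees values closer to the intended ones;
# clamping shows that crosstalk into very dark pixels cannot be fully removed.
print(precompensate(200, 50))
```

The clamp at zero luminance is the usual limitation of this approach: negative pre-compensated luminances are not displayable, so dark-pixel crosstalk can only be reduced, not eliminated.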
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Han with Wang's teachings by including "wherein the predetermined mapping relationship is obtained by measuring a ghosting sensitivity of a display apparatus," because this combination has the benefit of providing a mapping relation between the brightness and the input gray scale of the display, converting each pixel's luminance value in each viewpoint image into a gray-scale value so as to determine the crosstalk-reduced gray scale of each pixel of each viewpoint image and thereby drive the screen to display [See Wang: at least Description section, page 2 line 28 through page 3 line 16].

Regarding claims 2 and 12: Han and Wang teach all of the limitations of claims 1 and 11, and are analyzed as previously discussed with respect to those claims. Further on, Han and Wang teach wherein the multi-view images comprise a first viewpoint image and a second viewpoint image adjacent to the first viewpoint image, the first viewpoint image contains N pixels, the second viewpoint image contains N pixels, and N is an integer greater than 1 [See Han: at least Figs. 1-8 and par. 25-26, 40-50, 68-77, 85-87 regarding the multiview image means the 3D video image that provides various viewpoints in several directions to the user by geometrically correcting images photographed by one or more photographing devices and spatially synthesizing the images... when N views having different viewpoints and N depth information corresponding to the N views are input, the renderer 120 may render the multiview image based on at least one image and depth information of the input N views and depth information. Alternatively, when only the N views having different viewpoints are input, the renderer 120 may extract the depth information from the N views and then render the multiview image based on the extracted depth information… Specifically, the controller 140 may generate the multiview image by mapping a mixed pixel value in which a pixel value of a view of a specific viewpoint among the plurality of views having different viewpoints rendered by the renderer 120 and pixel values of views of adjacent viewpoints of the view of the specific viewpoint are mixed, to at least one target pixel zone…];

and wherein the apparatus further comprises a first determination unit configured to determine / the method further comprises determining a first region in the first viewpoint image and a second region corresponding to the first region in the second viewpoint image according to difference values between grayscale values of the N pixels of the first viewpoint image and grayscale values of the N pixels of the second viewpoint image, and determine the first region and the second region as the at least one target region [See Han: at least Figs. 1-8 and par. 25-26, 40-50, 68-77, 85-87 regarding the controller 140 may generate the multiview image by mapping a mixed pixel value in which a pixel value of a view of a specific viewpoint among the plurality of views having different viewpoints rendered by the renderer 120 and pixel values of views of adjacent viewpoints of the view of the specific viewpoint are mixed, to at least one target pixel zone… the controller 140 may generate the multiview image by setting the source pixel zone in each of a plurality of adjacent frames which are temporally adjacent to each other and mapping the source pixel zone to at least one target pixel zone. For example, the controller 140 may calculate the mixed pixel value by applying the predetermined weight to the 3D zone which is set by the above-mentioned method in each of the plurality of adjacent frames which are temporally adjacent to each other. That is, the controller 140 may calculate the mixed pixel value in consideration of pixel values of other adjacent frames such as a previous frame, a next frame, and the like, rather than simply based on a current frame, and the source pixel zone may be set according to various methods described above. In addition, the number of frames on which the calculation of the mixed pixel value is based may be variously considered… See Wang: at least Fig. 1 and Description section, page 2 line 28 through page 3 line 16, regarding the crosstalk-reduction method summarized above with respect to claims 1 and 11…].
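The first-determination step recited in claims 2 and 12, flagging corresponding regions of adjacent viewpoint images whose grayscale values differ strongly, can be sketched as a simple per-pixel threshold. The threshold value and the flat per-pixel lists below are illustrative assumptions, not taken from the application or the references:

```python
# Flag "target" pixels where adjacent viewpoint images differ strongly in
# grayscale, per the first-determination step of claims 2 and 12.
# The threshold and the flat pixel lists are illustrative assumptions.

DIFF_THRESHOLD = 40  # grayscale difference treated as ghosting-prone

def find_target_region(view1, view2):
    """Return indices of pixels whose grayscale difference between two
    adjacent viewpoint images exceeds the threshold; the same index set
    identifies the first region in view1 and the second region in view2."""
    if len(view1) != len(view2):
        raise ValueError("viewpoint images must contain the same N pixels")
    return [i for i, (g1, g2) in enumerate(zip(view1, view2))
            if abs(g1 - g2) > DIFF_THRESHOLD]

view1 = [120, 200, 90, 35, 250]
view2 = [118, 140, 95, 30, 100]
print(find_target_region(view1, view2))  # pixels 1 and 4 differ by more than 40
```

Only the flagged indices would then be passed to the grayscale-adjustment step, leaving low-difference regions untouched.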
Regarding claim 8: Han and Wang teach all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Further on, Han teaches or suggests wherein the output unit being configured to obtain, according to the adjusted multi-view images, a processed image for displaying on the display apparatus comprises: concatenating the adjusted multi-view images to obtain the processed image for displaying on the display apparatus [See Han: at least Figs. 2A-8 and par. 44-56, 66-77, 85-88, 90-95, 100-118 regarding In particular, the controller 140 generates the multiview image to be displayed on the display 130 based on the sub-pixel values configuring the plurality of views having different viewpoints rendered by the renderer 120 … the controller 140 may generate the multiview image to be output to the display 130 by mixing at least two sub-pixel values corresponding to views of at least two viewpoints and mapping the mixed sub-pixel value to the specific sub-pixel zone… as illustrated in FIG. 4A, a pixel value in which the pixel value of the view 410 of the viewpoint 1 as well as pixel values of a view 420 of a viewpoint 2 and a view 430 of a viewpoint 7, which are views of adjacent viewpoints, are mixed may be mapped to the first R sub-pixel 441 of the output image 440. That is, a pixel value in which a first R sub-pixel value 411 of the view 410 of the viewpoint 1, a first R sub-pixel value 421 of the view 420 of the viewpoint 2, and a first R sub-pixel value 431 of the view 430 of the viewpoint 7 are mixed according to a predetermined weight is mapped to the first R sub-pixel 441… Further, in Fig. 8, a multiview image is generated based on pixel values configuring the plurality of rendered views (S820)… Next, the generated multiview image is displayed (S830).].

Regarding claim 10: Han and Wang teach all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Further on, when combined with Wang's teachings, Han teaches a display device [See Han: at least Figs. 2A-2B and par. 44, 54-55, 62 regarding display 130], comprising: the apparatus of processing the image according to claim 1 [See Han: at least Fig. 2A and par. 54, 66-67 regarding The controller 140 may control an overall operation of the display device 100]; and a display apparatus configured to display an output image from the apparatus of processing the image [See Han: at least Figs. 2A-8 and par. 44-56, 66-77, 85-88, 91, 95, 100-118 regarding In particular, the controller 140 generates the multiview image to be displayed on the display 130 based on the sub-pixel values configuring the plurality of views having different viewpoints rendered by the renderer 120 …].

Regarding claim 20: Han and Wang teach all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Further on, when combined with Wang's teachings, Han teaches an electronic device [See Han: at least par. 45 regarding The multiview image display device 100 may be implemented as various kinds of display devices such as a TV, a monitor, a PC, a kiosk, a tablet PC, an electronic frame, a cell phone, and the like…], comprising: at least one processor; and a memory for storing instructions executable by the at least one processor, wherein the instructions are configured to, when executed by the at least one processor, cause the at least one processor to implement the method of claim 1 [See Han: at least par. 117-119 regarding As an example, a non-transitory computer readable medium having a program stored thereon may be provided, in which the program performs an operation of rendering a plurality of views having different viewpoints… Specifically, various applications or programs described above may be stored and provided in the non-transitory computer readable medium such as a compact disc (CD), a digital versatile disk (DVD), a hard disk, a Blu-ray disk, a universal serial bus (USB), a memory card, a read-only memory (ROM), or the like…].

Regarding claim 21: Han and Wang teach all of the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Further on, when combined with Wang's teachings, Han teaches a non-transitory computer-readable storage medium having computer instructions therein, wherein the computer instructions are configured to cause a computer to implement the method of claim 1 [See Han: at least par. 117-119, cited above with respect to claim 20].

Allowable Subject Matter

9. Claims 3-7, 9 and 13-18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

10. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANA J PICON-FELICIANO, whose telephone number is (571) 272-5252. The examiner can normally be reached Monday-Friday, 9:00-5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Christopher Kelley, can be reached at (571) 272-7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Ana Picon-Feliciano/
Examiner, Art Unit 2482

/CHRISTOPHER S KELLEY/
Supervisory Patent Examiner, Art Unit 2482
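The sub-pixel mixing Han describes around Fig. 4A, relied on in the claim 8 rejection above, maps a weighted mix of corresponding sub-pixels from a view and its adjacent views into one output sub-pixel. It reduces to a weighted average; the weights and sample values below are illustrative, not Han's:

```python
# Weighted sub-pixel mixing in the spirit of Han's Fig. 4A: an output
# sub-pixel takes a predetermined-weight mix of the corresponding
# sub-pixel from a primary view and its adjacent views.
# The weights and sample values are illustrative, not from Han.

def mix_subpixel(values, weights):
    """Mix corresponding sub-pixel values (e.g. the first R sub-pixel of
    views 1, 2 and 7) according to predetermined weights, normalizing so
    the weights need not sum to one."""
    total = sum(weights)
    return round(sum(v * w for v, w in zip(values, weights)) / total)

# First R sub-pixel of views 1, 2 and 7, weighted toward the primary view.
r_view1, r_view2, r_view7 = 180, 160, 120
print(mix_subpixel([r_view1, r_view2, r_view7], [0.6, 0.2, 0.2]))
```

Weighting the primary view most heavily preserves its content while the adjacent-view contributions smooth the transition between neighboring viewpoints.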

Prosecution Timeline

Jul 10, 2024
Application Filed
Mar 07, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598287
DISPLAY DEVICE, METHOD, COMPUTER PROGRAM CODE, AND APPARATUS FOR PROVIDING A CORRECTION MAP FOR A DISPLAY DEVICE, METHOD AND COMPUTER PROGRAM CODE FOR OPERATING A DISPLAY DEVICE
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12593021
ELECTRONIC APPARATUS AND METHOD FOR CONTROLLING THEREOF
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12567163
IMAGING SYSTEM AND OBJECT DEPTH ESTIMATION METHOD
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12561788
FLUORESCENCE MICROSCOPY METROLOGY SYSTEM AND METHOD OF OPERATING FLUORESCENCE MICROSCOPY METROLOGY SYSTEM
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12554122
TECHNIQUES FOR PRODUCING IMAGERY IN A VISUAL EFFECTS SYSTEM
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 69%
With Interview: 90% (+21.8%)
Median Time to Grant: 2y 11m
PTA Risk: Low

Based on 428 resolved cases by this examiner. Grant probability derived from career allow rate.
