DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments filed on 01/27/2026 have been considered but they are not persuasive.
However, the examiner finds that some amended limitations are taught by the references previously introduced.
In the Remarks, page 3, lines 23-26, and page 4, lines 1-5, Applicant argued that “The blender is the main component in the DPU to blend (merge or mix) multiple layers…Blender is also hardware. Each blend core requires a series of multiplication and addition operations, thereby occupying certain area. Based on the above explanation, the blend cores are hardware. However, the "composing layers" of Hu are not hardware, so blend cores are not equivalent to the "composing layers". In Hu, the hardware is the frame buffer and multimedia display processor, but the quantity of either is one. Hu does not disclose about multiple hardware components”.
The examiner respectfully disagrees with Applicant’s argument. In response to Applicant's argument that the references fail to show certain features of Applicant’s invention, it is noted that the features upon which Applicant relies (i.e., that the blender is also hardware, that the blender comprises multiple blend cores, that each blend core requires a series of multiplication and addition operations, and that the blend cores are therefore hardware) are not recited in rejected claims 1 and 7. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). However, because claim 1 recites “the display processor unit comprises multiple blend cores”, the examiner notes that Hu discloses in paragraph [0043] “sending the first composed layer to a frame buffer for caching and displaying a composition result through a display screen includes: composing the layers with the second attribute into a pre-composite layer by using the multimedia display processor”, in [0023] “the multimedia display processor (MDP) to compose the layers, and finally form a buffer in bufferqueue, and then use images composed in the buffer for display under an action of display driving”, and in [0026] “the plurality of layers can also be composed through a hardware composition mechanism of the multimedia display processor (MDP)”. Thus, Hu teaches that a multimedia display processor (MDP) (referred to as a blender) is hardware that composes the layers and forms a buffer in a bufferqueue (referred to as a blend core in a queue of blend cores) for displaying a composition result through a display screen.
Based on the above explanation, Hu teaches that the display processor unit (MDP) comprises multiple blend cores (multiple buffers), that the blend cores are hardware, and that the display processor unit (blender) composes (blends) multiple layers to display a composition result through a display screen.
In the Remarks, page 4, lines 18-20, Applicant argued that the "determination module 520" is not equivalent to the claimed "layer selector" because the "determination module" does not correspond one-to-one with the blend core.
The examiner respectfully disagrees with Applicant’s argument. In fact, in paragraph [0122], Hu discloses “the determination module 520 is also configured to acquire image data corresponding to each of the plurality of layers, determine that the image data corresponding to the layer meets the preset condition in response to the image data corresponding to the layer being greater than the preset data value”. Thus, Hu teaches that the determination module 520 (referred to as a layer selector) selects (acquires) image data corresponding to each of the plurality of layers and determines that the image data corresponding to a layer meets the preset condition when that image data is greater than the preset data value.
Independent claim 7 recites limitations similar to those of claim 1 and is rejected based on the explanation above.
Dependent claims 2, 6, 12-14, and 18 depend on independent claims 1 and 7 and are rejected under the current rejection.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 7-8, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Hu et al. (U.S. 2022/0139352 A1) in view of Trandafir et al. (U.S. 2019/0043248 A1).
Regarding Claim 1, Hu discloses a method for processing display data (Hu, [0006] “an image composition method”), wherein the method is applied to a display processor unit (Hu, [0006] “a multimedia display processor”), and the display processor unit comprises multiple blend cores (Hu, [0006] “composing layers”)
; the method comprises:
obtaining a pixel (Hu, [0024] “Each of the plurality of layers is composed of a plurality of pixels” Hu teaches obtaining pixels in each layer);
searching for a to-be-blended source layer among multiple source layers, wherein the to-be-blended source layer comprises pixels to be displayed (Hu, [0024] “Each of the plurality of layers is composed of a plurality of pixels” and [0028] “determining a layer attribute of each of the plurality of layers; composing a layer with a first attribute into a first composed layer by the graphics processing unit (GPU), and displaying a composition result through the display screen” Hu teaches searching for (determining) a layer among the plurality of layers that is blended (a first composed layer), includes pixels, and is displayed on the screen); and
assigning the to-be-blended source layer to different blend cores, wherein the to-be-blended source layer corresponds one-to-one with the blend cores, wherein each of the blend cores blends an assigned to-be-blended source layer to generate display data (Hu, [0043] “sending the first composed layer to a frame buffer for caching and displaying a composition result through a display screen includes: composing the layers with the second attribute into a pre-composite layer by using the multimedia display processor” Hu teaches assigning the blended source layer (the first composed layer) to a different frame buffer (referred to as a blend core) to blend with a layer (one-to-one with the blend core; a layer with a second attribute) and generate display data (displaying a composition result)).
Note: A buffer is a temporary area where data is stored in main memory (RAM) or on disk. Memory is electronic storage space, such as computer chips, and is hardware.
However, Hu does not explicitly teach obtaining a pixel coordinate of a target pixel point;
pixels to be displayed on a display axis, and the display axis is a coordinate axis of the pixel coordinate in a pixel coordinate system;
Trandafir teaches obtaining a pixel coordinate of a target pixel point (Trandafir, [0004] “Blending is performed to combine multiple graphical surfaces, e.g. blend multiple picture/pixel rectangles, in order to form a single image for a display” and [0023] “where each pipeline of the parallel pipelines fetch pixels from a source surface (which are illustrated as pixel rectangles), the software algorithm 135 performed by the one or more processor(s) 130 takes N rectangles, e.g. given by (x1, y1, x2, y2)” and [0053] “FIG. 5, a pictorial example 500 of a 2-rectangle overlap arrangement for a pixel blending operation” Trandafir teaches obtaining a pixel coordinate (x1, y1, x2, y2) of a target pixel point (P1, P2, P3, P4, Fig. 5) for blending); and
pixels to be displayed on a display axis, and the display axis is a coordinate axis of the pixel coordinate in a pixel coordinate system (Trandafir, [0063] “FIG. 6, a further pictorial example of a 2-rectangle overlap scenario for a pixel blending operation. Here, in 610: Sx={R1, R2} as there is an overlap on the ‘X’ axis. It follows that Sx ∩ Sy=∅, as R1 and R2 don't overlap” Trandafir teaches pixels displayed on a coordinate axis of a pixel coordinate system (Fig. 6)).
Hu and Trandafir are combinable because they are from the same field of endeavor, systems and methods for image processing, and try to solve similar problems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Hu to obtain a pixel coordinate of a target pixel point (as taught by Trandafir) because Trandafir provides obtaining a pixel coordinate (x1, y1, x2, y2) of a target pixel point (P1, P2, P3, P4, Fig. 5) for blending (Trandafir, [0004], [0023], Fig. 5, [0053]). Doing so may output the pixel-correct blending result, irrespective of the number of input image data layers (Trandafir, [0019]).
Regarding Claim 2, Hu discloses the method according to claim 1, wherein the display processor unit further comprises multiple layer selectors, and each layer selector corresponds one-to-one with one blend core (Hu, Fig. 8, [0120] “The determination module 520 is configured to determine a layer attribute of each of the plurality of layers. The layer attribute includes a first attribute and a second attribute” and [0122] “the determination module 520 is also configured to acquire image data corresponding to each of the plurality of layers, determine that the image data corresponding to the layer meets the preset condition in response to the image data corresponding to the layer being greater than the preset data value” Hu teaches a determination module (520) (referred to as a layer selector) that selects (acquires) a layer of the plurality of layers for blending (composing, one-to-one) when the image data corresponding to the layer meets the preset condition).
However, Hu does not explicitly teach
the source layers are provided with a predefined first number; and the step of searching for a to-be-blended source layer among multiple source layers comprises that: each of the layer selectors obtains the source layers one by one according to an order of the first number, and determines whether obtained source layers are the to-be-blended source layer;
each of the layer selectors renumbers the to-be-blended source layer based on an order of acquisition of the to-be-blended source layer, and a second number is generated; and each of the layer selectors searches for an assigned source layer of the layer selector, wherein the assigned source layer is the to-be-blended source layer whose second number is the same as a blend core number corresponding to the layer selector, and the assigned source layer is used to be assigned to a blend core corresponding to each layer selector.
Trandafir teaches the source layers are provided with a predefined first number; and the step of searching for a to-be-blended source layer among multiple source layers comprises that: each of the layer selectors obtains the source layers one by one according to an order of the first number and determines whether obtained source layers are the to-be-blended source layer (Trandafir, Figs. 1, 2, [0030] “the layer selection module 170 is arranged to configure the DCU 140 to generate composite pixel data for said pixel based on the selected subset N of active layers, the memory interface component 210 of the DCU 140 with address information for the selected subset N of active layers, etc., stored in the selection registers 272” Trandafir teaches that the source layers are provided with a predefined first number (1 - n numbers, top n layers 272, Fig. 2) by a layer selector, which determines the source layers to be blended (the subset N of active layers, stored in the selection registers 272)); and
each of the layer selectors renumbers the to-be-blended source layer based on an order of acquisition of the to-be-blended source layer, and a second number is generated; and each of the layer selectors searches for an assigned source layer of the layer selector, wherein the assigned source layer is the to-be-blended source layer whose second number is the same as a blend core number corresponding to the layer selector, and the assigned source layer is used to be assigned to a blend core corresponding to each layer selector (Trandafir, Figs. 1, 2, [0026] “For each pixel, the layer selection module 170 is arranged to select up to n layers from which pixel data is to be blended in order to generate composite pixel data for the respective pixel, and to configure the memory interface component 210 of the DCU 140 to fetch the relevant pixel data for the selected (up to) n layers” and [0029] “the subset N of active layers may be selected based on a predefined order of the layer descriptors 280 within the descriptor register set 180… the layer selector 270 may be arranged to sequentially read layer descriptor information 280 from the layer descriptor register set 180 in priority order and select the first n graphics layers identified” Trandafir teaches that the layer selector selects the to-be-blended source layers (the active layers) based on a predefined order, in priority order, e.g., selecting the first n graphics layers identified for a blend core number).
Hu and Trandafir are combinable; see the rationale in claim 1.
Regarding Claim 7, a combination of Hu and Trandafir discloses a display processor unit (Hu, [0006] “a multimedia display processor”), wherein the display processor unit comprises multiple layer selectors (Trandafir, [0030] “the layer selection module 170”) and multiple blend cores (Hu, [0043] “sending the first composed layer to a frame buffer for caching” Hu teaches a blend core (referred to as a frame buffer)), wherein
multiple layer selectors are configured for obtaining a pixel coordinate of a target pixel point;
searching for a to-be-blended source layer among multiple source layers, wherein the to-be-blended source layer comprises pixels to be displayed on a display axis, and the display axis is a coordinate axis of the pixel coordinate in a pixel coordinate system;
assigning the to- be-blended source layer to different blend cores, wherein the to-be-blended source layer corresponds one-to-one with the blend cores; and
each blend core is configured for blending an assigned to-be-blended source layer to generate display data.
Claim 7 is substantially similar to claim 1 and is rejected based on similar analyses.
Regarding Claim 8, a combination of Hu and Trandafir discloses the display processor unit according to claim 7, wherein each layer selector corresponds one-to-one with one blend core; the source layers are provided with a predefined first number; and
each of the layer selectors is configured for:
obtaining the source layers one by one in an order of the first number, determining whether an obtained source layer is the to-be-blended source layer,
renumbering the to-be-blended source layer based on an order of acquisition of the to-be-blended source layer, and then generating a second number; and
searching for an assigned source layer of the layer selector, wherein the assigned source layer is the to-be- blended source layer whose second number is the same as a blend core number corresponding to the layer selector, and the assigned source layer is used to be assigned to a blend core corresponding to each layer selector.
Claim 8 is substantially similar to claim 2 and is rejected based on similar analyses.
Regarding Claim 13, Hu discloses an electronic device (Hu, [0002] “an electronic device”), wherein the electronic device comprises the display processor unit (Hu, [0006] “a multimedia display processor”) according to claim 7.
Claims 6, 12, 14, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Hu et al. (U.S. 2022/0139352 A1) in view of Trandafir et al. (U.S. 2019/0043248 A1) and further in view of Carlsen et al. (U.S. 6466210 B1).
Regarding Claim 6, concerning the method according to claim 1, the combination of Hu and Trandafir does not explicitly teach wherein a number of the blend cores represents a hierarchy of the blend cores; and the step in which each blend core blends the assigned to-be-blended source layer to generate display data comprises:
blending, through a top-level blend core, the assigned to-be-blended source layer with a predefined background color; and
blending, through other blend cores, the assigned to-be-blended source layer with blending data output by a blend core of a previous level, wherein the other blend cores are any blend core except the top-level blend core, and the display data are blending data output by a blend core of a last level.
However, Carlsen teaches a number of the blend cores represents a hierarchy of the blend cores; and the step of each blend core performing blending of the assigned to-be-blended source layer to generate display data (Carlsen, Figs. 6A, 6B, Col. 2 lines 42-44 “providing layer blending information defining a hierarchy for blending layers of the image” and Col. 2 lines 52-61 “receiving object data associated with a first layer of an image to be displayed; storing the intermediate form data in a first buffer; receiving object data associated with a second layer of the image; blending intermediate form data for the first and second layer to derive blended data; and printing the image” and Col. 10, lines 66-67, Col. 11 lines 1-2 “blending processor 58 blends the layers according to the stack order two layers at a time until all the layers are blended (316) and then passes the resultant data to the frame buffer (318)” Carlsen teaches that blending is performed two layers at a time and stored in a layer buffer, which presents a hierarchy of the blend cores (referred to as a hierarchy for blending layers of the image), and the steps of blending the first layer and the second layer to generate display data);
blending, through a top-level blend core, the assigned to-be-blended source layer with a predefined background color (Carlsen, Col. 10, lines 66-67, Col. 11 lines 1-2 “blending processor 58 blends the layers according to the stack order two layers at a time until all the layers are blended (316) and then passes the resultant data to the frame buffer (318)” and Col. 3 lines 1-5 “receiving one or more of background objects to be drawn into a background layer; blending the foreground and background data to generate a composite image” Carlsen teaches blending, through a top-level blend core (blending the layers according to the stack order), the foreground layer and the background layer); and
blending, through other blend cores, the assigned to-be-blended source layer with blending data output by a blend core of a previous level, wherein the other blend cores are any blend core except the top-level blend core, and the display data are blending data output by a blend core of a last level (Carlsen, Col. 10, lines 66-67, Col. 11 lines 1-8 “blending processor 58 blends the layers according to the stack order two layers at a time until all the layers are blended (316) and then passes the resultant data to the frame buffer (318). The blend processor processes layers from the stack by combining the last layer created with its parent on the stack (the previous layer received in time). The resultant image is then combined with the next layer in the stack hierarchy until all of the layers have been blended, the result of which may be blended with the contents of the frame buffer” and Fig. 6, Col. 11, lines 21-24, “Nesting of layers in a parent/child hierarchy is achievable by sequencing the creation of layers along with blending of layers prior to the final blend time for the given page” Carlsen teaches blending, through other blend cores (the result of which may be blended with the contents of the frame buffer), by sequencing the creation of layers along with the blending of layers prior to the final layer of a last level (Fig. 6)).
Hu, Trandafir, and Carlsen are combinable because they are from the same field of endeavor, systems and methods for image processing, and try to solve similar problems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Hu so that a number of the blend cores represents a hierarchy of the blend cores (as taught by Carlsen) because Carlsen provides blending performed two layers at a time and stored in a layer buffer, presenting a hierarchy of the blend cores (referred to as a hierarchy for blending layers of the image), and the steps of blending the first layer and the second layer to generate display data (Carlsen, Figs. 6A, 6B, Col. 2 lines 42-44, Col. 2 lines 52-61). Doing so may allow only the current layer being drawn into to be maintained in uncompressed form; all other layers may be compressed to save memory space (Carlsen, Col. 4, lines 19-21).
Regarding Claim 12, a combination of Hu, Trandafir and Carlsen discloses the display processor unit according to claim 7, wherein the number of the blend cores represents a hierarchy of the blend cores;
a top-level blend core is used for blending the assigned to-be-blended source layer with a predefined background color; and
other blend cores are used for blending the assigned to-be-blended source layer with blending data output by a blend core of a previous level, wherein the other blend cores are any blend core except the top-level blend core, and the display data are blending data output by a blend core of a last level.
Claim 12 is substantially similar to claim 6 and is rejected based on similar analyses.
Regarding Claim 14, a combination of Hu, Trandafir and Carlsen discloses the method according to claim 2, wherein a number of the blend cores represents a hierarchy of the blend cores; and the step of each blend core performs blending the assigned to-be-blended source layer to generate display data comprises:
blending, through a top-level blend core, the assigned to-be-blended source layer with a predefined background color; and
blending, through other blend cores, the assigned to-be-blended source layer with blending data output by a blend core of a previous level, wherein the other blend cores are any blend core except the top-level blend core, and the display data are blending data output by a blend core of a last level.
Claim 14 is substantially similar to claim 12 and is rejected based on similar analyses.
Regarding Claim 18, a combination of Hu, Trandafir and Carlsen discloses the display processor unit according to claim 8, wherein the number of the blend cores represents a hierarchy of the blend cores;
a top-level blend core is used for blending the assigned to-be-blended source layer with a predefined background color; and
other blend cores are used for blending the assigned to-be-blended source layer with blending data output by a blend core of a previous level, wherein the other blend cores are any blend core except the top-level blend core, and the display data are blending data output by a blend core of a last level.
Claim 18 is substantially similar to claim 14 and is rejected based on similar analyses.
Allowable Subject Matter
Dependent claims 3, 4, 5, 9, 10, 11, 15, 16, 17, 19 and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding independent claims 1 and 7, the closest prior art references the examiner found, Hu et al. (U.S. 2022/0139352 A1) in view of Trandafir et al. (U.S. 2019/0043248 A1), have been made of record as teaching: obtaining a pixel (Hu, [0024]);
searching for a to-be-blended source layer among multiple source layers, wherein the to-be- blended source layer comprises pixels to be displayed (Hu, [0024], [0028]); assigning the to-be-blended source layer to different blend cores, wherein the to-be-blended source layer corresponds one-to-one with the blend cores, wherein each of the blend cores blends an assigned to-be-blended source layer to generate display data (Hu, [0043]); obtaining a pixel coordinate of a target pixel point (Trandafir, [0004], [0032]); pixels to be displayed on a display axis, and the display axis is a coordinate axis of the pixel coordinate in a pixel coordinate system (Trandafir, [0063]), recited in claims 1, 7.
However, the art of record does not teach or suggest the claims taken as a whole, and in particular the limitations pertaining to:
wherein the display processor unit further comprises multiple layer selectors, and each layer selector corresponds one-to-one with one blend core; the source layers are provided with a predefined first number; and the step of searching for a to-be-blended source layer among multiple source layers comprises:
performing an iteration by each layer selector, wherein the iteration comprises:
obtaining a source layer of the first number identical to a first parameter, and determining whether an obtained source layer is the to-be-blended source layer, wherein an initial value of the first parameter is zero;
incrementing, when the obtained source layer is the to-be-blended source layer, a second parameter by one, wherein an initial value of the second parameter is negative one;
determining whether the second parameter incremented by one is identical to a number of the blend core;
when they are identical, taking the second parameter incremented by one as the second number of the obtained source layer and stopping the iteration, wherein the obtained source layer is the assigned source layer for the layer selector, and the assigned source layer is used for allocation to a blend core corresponding to each layer selector; and
when they are not identical, incrementing the first parameter by one and conducting a new round of the iteration, recited in claims 3, 9.
Claims 4, 5, 10, 11, 15, 19 are allowed because they depend on claims 3, 9.
Claims 16 and 17 are allowed because they depend on claims 4 and 5.
Claim 20 is allowed because it depends on claim 10.
Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance”.
Conclusion
The Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KHOA VU whose telephone number is (571) 272-5994. The examiner can normally be reached 8:00-4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung can be reached at 571-272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KHOA VU/Examiner, Art Unit 2611
/KEE M TUNG/Supervisory Patent Examiner, Art Unit 2611