DETAILED ACTION
The present Office action is in response to the amendments filed on 24 NOVEMBER 2025.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Claims 1, 5, 8, 12, 15, and 19 have been amended. No claims have been added or cancelled. Claims 1-20 are pending and herein examined.
Response to Arguments
Applicant's arguments filed 24 NOVEMBER 2025 have been fully considered but they are not persuasive.
With regard to claim 1, rejected under 35 U.S.C. § 103 as being unpatentable over U.S. Publication No. 2012/0281906 A1 (hereinafter “Appia”) in view of U.S. Publication No. 2010/0103168 A1 (hereinafter “Jung”), Applicant’s position is that Appia does not disclose the stereoscopic image generation box as a device separate from other devices. See Remarks, pp. 2-3.
The Examiner acknowledges that Appia discloses the stereoscopic image generation box as a separate component, but not in sufficient detail to conclude it is a separate device. Implemented as a separate component, the stereoscopic image generation box could be present within the electronic display. However, Jung’s disclosure describes an image processing device that is separate from the image displaying device. See Jung, FIG. 4. Jung also discloses that the image processing device is connected to the image displaying device with a high definition multimedia interface (HDMI), which corresponds to the newly claimed data transmission line. No arguments are presented towards Jung in the Remarks.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: (1) image receiving and detecting unit, (2) depth information analysis unit, (3) image processing unit, (4) synthesis unit, and (5) data transmission unit in claims 1 and 15, and (6) resolution adjustment unit in claims 2 and 16.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. Upon review of the specification, the corresponding structure is identified as follows:
(2) depth information analysis unit, (3) image processing unit, (4) synthesis unit, and (6) resolution adjustment unit – [0023], “The depth information analysis unit 220, the image processing unit 230, the resolution adjustment unit 240 and the synthesis unit 250 can be realized by such as a circuit, a circuit board, a chip, a computer code or a recording medium for storing computer code.”
(1) image receiving and detecting unit and (5) data transmission unit – [0023] describes the corresponding structure as an interface and the equivalent thereof is any means by which to receive or transmit data.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1, 6-8, 13-15, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Publication No. 2012/0281906 A1 (hereinafter “Appia”) in view of U.S. Publication No. 2010/0103168 A1 (hereinafter “Jung”).
Regarding claim 1, Appia discloses a stereoscopic image generation box (FIG. 1, conversion device 110. [0024], “the conversion device 110 operations for converting a 2D image into its corresponding 3D image”), comprising:
an image receiving and detecting unit, used for receiving a two-dimensional image from an image source through an image transmission line plugged into the stereoscopic image generation box (FIG. 1, conversion device 110 connected to encoding device 106 receiving images from camera 104. [0022], “the conversion device 110 receives and processes such bit stream directly from the encoding device 106 in real-time.” [0022], “the encoding device 106 outputs such bit stream directly to the conversion device 110 via a communication channel (e.g., Ethernet, Internet, or wireless communication channel)”);
a depth information analysis unit, used for obtaining a depth information according to the two-dimensional image (FIG. 2, generate depth map 204. [0024], “FIG. 2, at a step 202, […] the conversion device 110: (a) detects and classifies various low level features […] and high level features […] within the 2D image; and (b) performs a mean shift clustering operation to segment the 2D image into regions. At a next step 204, in response to such features, and in response to such information from the training database, the conversion device 110 generates a depth map that assigns suitable depth values to such regions within the 2D image”);
an image processing unit, used for converting the two-dimensional image into a left-eye image and a right-eye image according to the depth information (FIG. 2, synthesize left view 210 and synthesize right view 212. [0004], “In response to the depth map, left and right views of the three-dimensional visual image are synthesized.” [0034-0036] describes manipulation of the pixels in the 2D image with the depth map for generating left and right views);
a data transmission unit, used for outputting the stereoscopic image to a display (FIG. 1, conversion device 110 connected to display device 112. [0021], “A conversion device 110 […] (e) outputs the converted video sequence to a display device […]. The display device 112: (a) receives the converted video sequence from the conversion device 110; and (b) in response thereto, displays such 3D images (e.g., 3D images of the object 102 and its surrounding foreground and background), which are viewable by a human user 114”);
wherein the image source is a separate device from the stereoscopic image generation box (FIG. 1, encoding device 106 and display device 112 are disclosed as separate devices, see [0022]), and the two-dimensional image is transmitted through the image transmission line ([0022], “the encoding device 106 outputs such bit stream directly to the conversion device 110 via a communication channel (e.g., Ethernet, Internet, or wireless communication channel)”. Note, the bit stream includes the 2D encoded image captured by camera 104).
Appia fails to expressly disclose a data transmission line plugged into the display; and
a synthesis unit, used for synthesizing the left-eye image and the right-eye image to generate a stereoscopic image;
the stereoscopic image generation box and the electronic device including the display are separate devices, and the stereoscopic image is transmitted through the data transmission line.
However, Jung teaches a data transmission line plugged into the display (FIG. 4, the arrows between image processing device 200 and image displaying device 300. [0077], “the image processing device 200 and the image displaying device 300 may transmit and receive data via HDMI”); and
a synthesis unit, used for synthesizing the left-eye image and the right-eye image to generate a stereoscopic image (FIG. 4 depicts image processing device 200 with 3D image converting unit 240 of FIG. 5 that includes stereo rendering unit 650 of FIG. 6. [0103], “The stereo rendering unit 650 generates a left-eye image and a right-eye image by using video images received from the video data decoding unit 210 and a depth map received from the depth map buffer unit 640, and generates a 3D format image including both of the left-eye image and the right-eye image”);
the stereoscopic image generation box and the electronic device including the display are separate devices (FIG. 4, image processing device 200 is a separate device from image displaying device 300), and the stereoscopic image is transmitted through the data transmission line ([0059], “The image processing device 200 is a device for decoding video data, generating 2D video images, and […] converting the 2D video images to 3D images by using metadata for disk management and transmitting the 3D images to the image displaying device 300.” [0077], “the image processing device 200 and the image displaying device 300 may transmit and receive data via HDMI”).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to have generated a composite image for appropriately rendering two images, as taught by Jung ([0103]), in Appia’s disclosure. One would have been motivated to modify Appia’s disclosure, by incorporating Jung’s disclosure, because it is an obvious combination of prior art elements to create a synthesized composite image that is supported by the display and any type of auxiliary devices (e.g., head-mounted display, see Jung, FIG. 8A) to predictably yield the result of correctly displaying stereoscopic video.
Regarding claim 6, Appia and Jung disclose every limitation of claim 1, as outlined above. Additionally, Appia discloses wherein the image receiving and detecting unit and the image source are linked through an image transmission line ([0022], “the encoding device 106 outputs such bit stream directly to the conversion device 110 via a communication channel (e.g., Ethernet, Internet, or wireless communication channel)”).
Regarding claim 7, Appia and Jung disclose every limitation of claim 1, as outlined above. Additionally, Jung discloses wherein the data transmission unit and the display are linked through a data transmission line ([0077], “the image processing device 200 and the image displaying device 300 may transmit and receive data via HDMI”). The same motivation of claim 1 applies to claim 7.
Regarding claim 8, the limitations are the same as those in claim 1; however, they are written in process form instead of machine form. Therefore, the same rationale of claim 1 applies equally as well to claim 8.
Regarding claim 13, the limitations are the same as those in claim 6. Therefore, the same rationale of claim 6 applies equally as well to claim 13.
Regarding claim 14, the limitations are the same as those in claim 7. Therefore, the same rationale of claim 7 applies equally as well to claim 14.
Regarding claim 15, the limitations are the same as those in claim 1. Therefore, the same rationale of claim 1 applies equally as well to claim 15.
Regarding claim 20, the limitations are the same as those in claim 7. Therefore, the same rationale of claim 7 applies equally as well to claim 20.
Claim(s) 2, 9, and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Publication No. 2012/0281906 A1 (hereinafter “Appia”) in view of U.S. Publication No. 2010/0103168 A1 (hereinafter “Jung”), and further in view of U.S. Publication No. 2014/0333739 A1 (hereinafter “Yang”).
Regarding claim 2, Appia and Jung disclose every limitation of claim 1, as outlined above. Appia and Jung fail to expressly disclose further comprising: a resolution adjustment unit, used for automatically converting resolutions of the left-eye image and the right-eye image according to a frame resolution of the display.
However, Yang teaches further comprising: a resolution adjustment unit, used for automatically converting resolutions of the left-eye image and the right-eye image according to a frame resolution of the display ([0069], “the left image processing unit (102) and the right image processing image (103) may convert the resolution of the left image and the resolution of the right image, which may be inputted in diverse values, in accordance with the resolution of the corresponding display device.” Note, [0065] describes the 3D image display device can correspond to a set-top box, which is consistent with Appia’s conversion device 110 in FIG. 1 and Jung’s image processing device 200 in FIG. 4).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to have adjusted a resolution to a supported resolution, as taught by Yang ([0069]), in Appia and Jung’s invention. One would have been motivated to modify Appia and Jung’s invention, by incorporating Yang’s disclosure, because it is an obvious use of a known technique of scaling video resolution to meet a target resolution for display devices ensuring a desired quality output.
Regarding claim 9, the limitations are the same as those in claim 2. Therefore, the same rationale of claim 2 applies equally as well to claim 9.
Regarding claim 16, the limitations are the same as those in claim 2. Therefore, the same rationale of claim 2 applies equally as well to claim 16.
Claim(s) 3, 4, 10, 11, 17, and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Publication No. 2012/0281906 A1 (hereinafter “Appia”) in view of U.S. Publication No. 2010/0103168 A1 (hereinafter “Jung”), further in view of U.S. Patent No. 8,108,633 B2 (hereinafter “Munshi”), and even further in view of U.S. Publication No. 2013/0286156 A1 (hereinafter “Golas”).
Regarding claim 3, Appia and Jung disclose every limitation of claim 1, as outlined above. Additionally, Appia discloses the image processing unit (FIG. 2, synthesize left view 210 and synthesize right view 212. [0004], “In response to the depth map, left and right views of the three-dimensional visual image are synthesized.” [0034-0036] describes manipulation of the pixels in the 2D image with the depth map for generating left and right views).
Appia and Jung fail to expressly disclose wherein the image receiving and detecting unit is further used for detecting an image resolution of the image source; the image processing unit selects processing chips from a plurality of processing chips according to the image resolution.
However, Golas teaches wherein the image receiving and detecting unit is further used for detecting an image resolution of the image source ([0050], “when decoding a video stream the processor begins with a decode information set including a fully-specified frame. In these invention embodiments, the decode information set contains a fully specified image and parameter sets specifying, for example, one or more of video resolution, size […].” Note, each of Appia and Jung decode an incoming video stream for which, according to Golas’ teachings, resolution and size of the video will be included and known).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to have decoded frame information including resolution and size, as taught by Golas ([0050]), in Appia and Jung’s invention. One would have been motivated to modify Appia and Jung’s invention, by incorporating Golas’ disclosure, to accurately decode the input bit stream as the metadata represents decoding characteristics for reproducing the original frames (Golas: [0004-0005]).
Appia, Jung, and Golas fail to expressly disclose the image processing unit selects processing chips from a plurality of processing chips according to the image resolution.
However, Munshi teaches the image processing unit selects processing chips from a plurality of processing chips according to the image resolution (col. 1, l. 55 to col. 2, l. 16 describes allocating resources to processing units for a task. Also see claim 1, “selecting one of the plurality of processing units according to a comparison among matching scores of the processing units, wherein the processing capabilities indicate whether the selected processing unit supports the dedicated local storage and whether the selected processing unit is capable of the hardware support for the graphics operation.” Col. 8, ll. 3-10, “process 400 may select a set of physical compute devices from attached physical compute devices at block 405. The selection may be determined based on a matching between the compute capability requirement against the compute capabilities stored in the capability data structure.” Note, Munshi discloses generalized teachings for assigning tasks to processors based on leveraging processing resources and a resolution and size of an image contribute to the complexity of a processing task requiring consideration for the leveraging of processing resources).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to have leveraged processing resources for a task, as taught by Munshi (col. 1, l. 55 to col. 2, l. 16), in Appia, Jung, and Golas’ invention. One would have been motivated to modify Appia, Jung, and Golas’ invention, by incorporating Munshi’s disclosure, to improve leveraging of processing resources based on task requirements (Munshi: col. 1, ll. 43-51).
Regarding claim 4, Appia, Jung, Golas, and Munshi disclose every limitation of claim 3, as outlined above. Additionally, Appia discloses wherein the image receiving and detecting unit is further used for detecting an image size of the image source; the image processing unit further selects processing chips from the processing chips according to the image size to convert the two-dimensional image into the left-eye image and the right-eye image (FIG. 2, synthesize left view 210 and synthesize right view 212. [0004], “In response to the depth map, left and right views of the three-dimensional visual image are synthesized.” [0034-0036] describes manipulation of the pixels in the 2D image with the depth map for generating left and right views. Note, the rejection of claim 3 establishes Golas detects frame resolution and size and then the combination with Munshi is to leverage processor resources dependent on the task and image manipulation is dependent on its complexity which factors in resolution and size. That is to say, the larger the resolution and size, the more resources need to be leveraged by a processor(s)). The same motivation of claim 3 applies to claim 4.
Regarding claim 10, the limitations are the same as those in claim 3. Therefore, the same rationale of claim 3 applies equally as well to claim 10.
Regarding claim 11, the limitations are the same as those in claim 4. Therefore, the same rationale of claim 4 applies equally as well to claim 11.
Regarding claim 17, the limitations are the same as those in claim 3. Therefore, the same rationale of claim 3 applies equally as well to claim 17.
Regarding claim 18, the limitations are the same as those in claim 4. Therefore, the same rationale of claim 4 applies equally as well to claim 18.
Claim(s) 5, 12, and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Publication No. 2012/0281906 A1 (hereinafter “Appia”) in view of U.S. Publication No. 2010/0103168 A1 (hereinafter “Jung”), and further in view of U.S. Publication No. 2016/0349503 A1 (hereinafter “Grossmann”).
Regarding claim 5, Appia and Jung disclose every limitation of claim 1, as outlined above. Appia and Jung fail to expressly disclose wherein the display is a naked eye stereoscopic display; the display detects a human eye tracking information; the synthesis unit renders the left-eye image and the right-eye image according to the human eye tracking information to generate the stereoscopic image.
However, Grossmann teaches wherein the display is a naked eye stereoscopic display ([0010], “autostereoscopic display.” [0004], “view autostereoscopic images “with the naked eye””); the display detects a human eye tracking information ([0026], “a video camera forming part of an eye tracking or head tracking system 20 is attached to the display 14 and communicates with the computer system 10”); the synthesis unit renders the left-eye image and the right-eye image according to the human eye tracking information to generate the stereoscopic image ([0027], “The head tracking system 20 keeps track of any movements of the head of the user and signals these movements to the computer system, which will then adapt the information displayed on the screen 16 to the changed position of the user”).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to have implemented an autostereoscopic system, as taught by Grossmann ([0026-0027]), in Appia and Jung’s invention. One would have been motivated to modify Appia and Jung’s invention, by incorporating Grossmann’s disclosure, to provide a more comfortable experience for a user (Grossmann: [0003]).
Regarding claim 12, the limitations are the same as those in claim 5. Therefore, the same rationale of claim 5 applies equally as well to claim 12.
Regarding claim 19, the limitations are the same as those in claim 5. Therefore, the same rationale of claim 5 applies equally as well to claim 19.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to STUART D BENNETT whose telephone number is (571)272-0677. The examiner can normally be reached Monday - Friday from 9:00 AM - 5:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Vaughn can be reached at 571-272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/STUART D BENNETT/Examiner, Art Unit 2481