DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Wang
Claims 1, 8-10, and 17-19 are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by Wang (CN 111,246,196; an English translation is provided).
As per claim 1, Wang teaches a method for playing a video having a three-dimensional (3D) effect, the method being performed by a terminal and comprising:
displaying a playing interface of an original video, the playing interface comprising a 3D effect control; and playing a target video having the 3D effect in the playing interface in response to an operation on the 3D effect control ("Step S101: obtain the video to be processed and determine the target object corresponding to the video, the video to be processed comprising at least two frames of video sequence images." in Para. [0033]; "Step S103: superimpose each first image onto the determined target background image to obtain processed video having a 3D display effect, wherein the target background image includes at least one white line of a set width." in Para. [0040]),
the 3D effect being generated by a moving object that is included in the target video and that moves between target occlusion images in a target raw image, an occlusion method of each of the target raw image and the target occlusion image in the target raw image being determined based on a moving track of the moving object in a foreground image sequence, and the foreground image sequence comprising at least one frame of a foreground image that comprises the moving object and that is obtained by segmenting a raw image of the original video ("In step S102, the image portion of the region where the target object is located is extracted from each video sequence image to generate a corresponding first image." in Para. [0036]; "Each first image corresponds to a background image, and the target background image is the same for every frame; that is, the same target background image is used for the multiple frames during video processing, and the target background image is the same size as the first images. The target background image can be viewed as a complete frame of background image with at least one white line drawn on it; after a first image is overlaid on the target background image, the white lines serve as reference objects for the movement of the target object. The width of the white lines can be set according to the size of the target background image and is not limited here." in Para. [0041]; "Specifically, each first image is superimposed on the target background image: the image portion of the region where the target object is located in the first image replaces (or occludes) the image portion of the corresponding region of the target background image, yielding a corresponding superimposed image. The white lines of the target background image play two roles in the superimposed image. On the one hand, they serve as reference objects for the movement of the target object; on the other hand, they can be regarded as dividing the superimposed image into layers. Specifically, a white line itself can be understood as a boundary layer, and the layer beneath it as the background image layer. When the target object in a superimposed image does not occlude a white line, the viewer perceives the target object as lying in the background image layer, behind the boundary layer; when the target object occludes a white line, the viewer perceives the target object as lying in front of the boundary layer." in Para. [0042]; "Because the target object moves toward or away from the lens over the course of the video, the relative position of the target object and the white lines differs among the superimposed images corresponding to different first images: in some superimposed images the target object does not occlude the white lines, while in others it does. When the superimposed images are played as a video, the viewer perceives the target object as moving between different layers, which produces a naked-eye 3D display effect." in Para. [0043]).
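The occlusion mechanism described above (white lines drawn over the background, with the segmented foreground replacing whatever lies beneath it) amounts to a simple per-frame compositing step. The following is an illustrative sketch only, not code from the Wang reference or the application; the function and array names are hypothetical.

```python
import numpy as np

def composite_frame(background, lines_mask, foreground, fg_mask):
    """Composite one output frame in the described layer order:
    background at the bottom, white boundary lines above it, and the
    segmented moving object (foreground) on top, so the object appears
    to occlude a line wherever their regions intersect."""
    frame = background.copy()
    frame[lines_mask] = 255               # draw the white reference lines over the background
    frame[fg_mask] = foreground[fg_mask]  # foreground replaces everything beneath it
    return frame

# Toy example: an 8x8 grayscale background, one vertical "white line",
# and a 2x2 "object" that moves rightward across frames and eventually
# covers the line -- the occlusion that produces the depth illusion.
bg = np.full((8, 8), 50, dtype=np.uint8)
line = np.zeros((8, 8), dtype=bool)
line[:, 4] = True

frames = []
for x in range(6):
    fg = np.full((8, 8), 200, dtype=np.uint8)
    mask = np.zeros((8, 8), dtype=bool)
    mask[3:5, x:x + 2] = True
    frames.append(composite_frame(bg, line, fg, mask))

# Early frames: the line is fully visible; once the object reaches
# column 4, it hides part of the line and appears to move in front.
print(frames[0][3, 4], frames[4][3, 4])  # 255 (line visible) 200 (line occluded)
```

Playing such frames in sequence is what, per Wang, yields the perceived movement between layers.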
As per claim 8, Wang teaches wherein a motion trend of the moving object is a motion trend towards an outer side of a screen of a terminal (Fig. 4C, "The first images and the target background image are the same size, and each first image corresponds in position to the target background image. The region of the person in first image A1 corresponds to region 2 of the target background image and does not overlap a white line; the region of the person in first image A2 intersects regions 1 and 2 of the target background image, i.e., covers line 1; the region of the person in first image A3 intersects regions 1, 2, and 3 of the target background image, i.e., covers lines 1 and 2. Then, as shown in FIG. 4a, first image A1 and the target background image yield superimposed image 1, with the person in region 2; as shown in FIG. 4b, first image A2 and the target background image yield superimposed image 2, with the person covering part of line 1; as shown in FIG. 4c, first image A3 and the target background image yield superimposed image 3, with the person covering part of line 1 and part of line 2. The processed video is then obtained from superimposed images 1, 2, and 3. Relative to the two white lines, the person moves from far from the lens to close to the lens: first the person occludes neither line 1 nor line 2, then occludes line 1 but not line 2, and finally occludes both line 1 and line 2. With the two white lines as boundary layers, the viewer perceives the person as moving from behind lines 1 and 2 to the layer in front of lines 1 and 2, i.e., a naked-eye 3D display effect is generated." in Para. [0044]).
As per claim 9, Wang teaches wherein a display level of the foreground image is higher than a display level of the target occlusion image, and the display level of the target occlusion image is higher than a display level of a background image in the raw image (Figs. 4A-4C and Para. [0044], as quoted in the rejection of claim 8 above: the person is drawn over the white lines, and the white lines are drawn over the background image).
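The display-level ordering recited in claim 9 (foreground above the occlusion image, occlusion image above the background) corresponds to a standard painter's-algorithm composition: draw layers from the lowest display level to the highest. The sketch below is illustrative only, under assumed names; it is not code from the application or the reference.

```python
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    level: int   # higher level = drawn later = appears in front
    pixels: dict # {(row, col): value} for the pixels this layer owns

def render(layers, shape=(4, 4)):
    """Painter's algorithm: draw layers in ascending display level, so a
    higher-level layer overwrites any lower-level layer it overlaps."""
    canvas = [[None] * shape[1] for _ in range(shape[0])]
    for layer in sorted(layers, key=lambda l: l.level):
        for (r, c), v in layer.pixels.items():
            canvas[r][c] = v
    return canvas

layers = [
    Layer("background", 0, {(r, c): "bg" for r in range(4) for c in range(4)}),
    Layer("occlusion (white line)", 1, {(r, 2): "line" for r in range(4)}),
    Layer("foreground (moving object)", 2, {(1, 2): "obj", (1, 3): "obj"}),
]
canvas = render(layers)
print(canvas[0][2], canvas[1][2])  # line obj -- the object hides the line where they overlap
```

Only where the foreground overlaps the line does the line disappear, matching the claimed level ordering.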
As per claim 10, Wang teaches one or more non-transitory computer readable media comprising computer readable instructions which, when executed by a processor, configure a data processing system to perform ("The electronic device 30 may include a processing device (e.g., a central processor, a graphics processor, etc.) 301, which may perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 302 or a program loaded from a storage device 308 into a random access memory (RAM) 303. The RAM 303 also stores various programs and data required for the operation of the electronic device 30. The processing device 301, the ROM 302, and the RAM 303 are connected to each other through a bus 304, and an input/output (I/O) interface 305 is also connected to the bus 304." in Para. [0103]). The remaining limitations of claim 10 have been discussed in the rejection of claim 1 and are rejected under the same rationale.
As per claim 17, the limitations of claim 17 have been discussed in the rejection of claim 8 and are rejected under the same rationale.
As per claim 18, the limitations of claim 18 have been discussed in the rejection of claim 9 and are rejected under the same rationale.
As per claim 19, Wang teaches a data processing system, comprising: a processor; and memory storing computer readable instructions which, when executed by the processor, configure the data processing system to perform ("The electronic device comprises a memory and a processor, wherein the processor may be referred to as the processing device 301 described below, and the memory may include the read-only memory (ROM) 302 and the random access memory (RAM) 303 described below." in Para. [0102]). The remaining limitations of claim 19 have been discussed in the rejection of claim 1 and are rejected under the same rationale.
Allowable Subject Matter
Claims 2-7, 11-16 and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SUNGHYOUN PARK whose telephone number is (571)270-1333. The examiner can normally be reached M - Thur 6:00 am - 4 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, THAI Q TRAN can be reached on (571)272-7382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SUNGHYOUN PARK/Examiner, Art Unit 2484