DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments, see Remarks pg. 6, filed 10/2/2025, with respect to the rejection(s) of claim(s) 1-18 under 35 U.S.C. 103 have been fully considered. Because the applicant’s arguments in said Remarks are directed to newly amended limitations not previously presented, the arguments are persuasive, and the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made below to address the newly amended limitations.
Claim Rejections - 35 USC § 103
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1-18 are rejected under 35 U.S.C. 103 as being unpatentable over KIM; Chang-won et al., US 20150256891 A1 (hereafter Kim), in view of Koike; Akira et al., US 20120206493 A1 (hereafter Koike), further in view of Hoisko; Jyrki, US 7755566 B2 (hereafter Hoisko), further in view of Daly; Scott et al., US 20210385533 A1 (hereafter Daly), and further in view of Watanabe; Mihoko et al., US 20150103250 A1 (hereafter Watanabe).
Regarding claim 1, “a control device comprising: an obtainer that obtains content and first type information indicating a type of the content; a determiner that performs type determination processing on the content obtained by the obtainer, to obtain second type information indicating a type of the content; and a generator that generates and outputs control information for increasing intensity of a presentation effect to be applied at a time of presentation of the content when the first type information and the second type information match, compared to when the first type information and the second type information do not match, wherein the generator calculates intensity of the presentation effect for each of a plurality of items of partial content included in the content, and performs filtering processing that temporally changes based on the intensity of the presentation effect calculated,” Kim Fig. 3 and para 78-83 teach element 221, a genre extraction module, for identifying the genre characteristics/types of the audio and video content. See also para 88, identifying content with a received genre identifier for a first type of genre and then determining that the content with the first type of genre comprises different types of genres. See also para 90-94, element 224, a genre determination module, which compares the content genre received in the metadata with the content genre information from the analysis module, and control module 225, which controls the video and audio processing according to para 96-98 and 109-113.
Kim para 13 and 109 teach obtaining video content comprising genre metadata and analyzing the content and metadata to determine the genre in the event that the content comprises different genres, wherein the controller is further configured to calculate the reliability of the genre information by comparing a first genre identification characteristic value corresponding to the genre information, which is acquired from the storage, and a second genre identification characteristic value which is acquired by analyzing the content. Regarding “wherein the generator calculates intensity of the presentation effect for each of a plurality of items of partial content included in the content, and performs filtering processing that temporally changes based on the intensity of the presentation effect calculated,” Kim does not specifically use the term “intensity” to describe the visual effect changes it performs; however, Kim does teach the following with respect to performing filtering processing. For example, Kim para 50-55 teaches the following with respect to known filtering processes as they pertain to controlling the video display based on identified genre data:
…The video processor 130 may perform various signal processing operations with respect to the video signal.
[0051] In particular, the video processor 130 may process the video of the content by using a video setting value on a video mode corresponding to the genre of the content under the control of the controller 120. The video setting value may refer to a setting value on the video, such as color temperature, definition, contrast, brightness, etc., which may be pre-defined for each video mode corresponding to a content genre.
[0052] In addition, the video processor 130 may perform various video processing operations such as scaling, noise filtering, frame rate conversion, resolution conversion, etc.
[0053] The controller 120 controls an overall operation of the display apparatus 100. In particular, the controller 120 may extract the genre information of the content from the received metadata, analyze the content, calculate reliability of the genre information of the content, and control the video processor 130 to process the video of the content according to a result of the calculating the reliability. To achieve this, the display apparatus 100 may store genre identification characteristic values corresponding to a plurality of content genres, and video setting values for a plurality of video modes corresponding to the plurality of content genres.
[0054] Specifically, the controller 120 may extract the genre information of the content included in the metadata. In addition, the controller 120 may analyze the content and acquire content information that includes a genre identification characteristic value corresponding to the genre of the content.
[0055] In this case, the genre identification characteristic included in the content information may include at least one of a shot characteristic, a motion characteristic, a brightness characteristic, a color characteristic, an edge characteristic, a text characteristic, a saturation characteristic related to the video of the content, a Mel-Frequency Cepstral Coefficients (MFCC) characteristic, a periodicity characteristic, an energy characteristic, a Zero Crossing Rate (ZCR) characteristic, a pitch characteristic, and a frequency peak characteristic related to the audio of the content. The controller 120 may analyze at least one of the video and the audio of the received content and may acquire values of such genre identification characteristics.
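For illustration only, the claimed operation of calculating a presentation-effect intensity for each item of partial content and varying filtering processing over time may be sketched as follows. This is a hypothetical sketch, not a disclosure of any cited reference; the function names, labels, and numeric values are illustrative assumptions.

```python
# Hypothetical sketch: per-segment effect intensity that is higher when
# the metadata type (first type information) matches the analyzed type
# (second type information), yielding an intensity that varies over the
# content timeline and can drive temporally changing filtering.

def segment_intensity(first_type, second_type, base=0.5, boost=1.0):
    """Higher intensity when the metadata type and the analyzed type match."""
    return boost if first_type == second_type else base

def temporal_intensities(analyzed_types, metadata_type):
    """analyzed_types: one analyzed type label per item of partial content."""
    return [segment_intensity(metadata_type, t) for t in analyzed_types]

# Four segments of partial content; metadata says the content is "sports".
intensities = temporal_intensities(["sports", "sports", "news", "sports"], "sports")
print(intensities)  # [1.0, 1.0, 0.5, 1.0]
```

The resulting intensity sequence changes segment by segment, so any downstream filtering keyed to it (e.g. noise reduction strength) changes temporally, as the claim recites.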
All things considered, although Kim does not specifically use the term “intensity” to describe the visual effect changes it performs, a person of ordinary skill in the art would reasonably infer that Kim changes the intensity of a presentation effect in an increasing manner. The prior art to Koike para 33-36, 40, 51-54, and 63-80 teaches extracting data from the broadcast signal by carrying out audio analysis, video analysis, etc., to extract genre information representing the type of program in order to effect picture quality based on values applied in an increasing manner. See also Koike para 70 and 100-104: the image quality parameter value can be gradually changed by a conventionally known method, for example by connecting the image quality parameter value set in advance for the genre before the change to the value set in advance for the genre after the change with a straight line, as illustrated in the graph of (a) of FIG. 2, and changing the value gradually by taking any plurality of values on the straight line, including a median value of the image quality parameter values. See also Koike para 17 and 67, teaching that when the genre of the displayed contents changes to another genre, the value of the image quality parameter is changed gradually to that set in advance corresponding to the other genre; as a result, even if the value of the image quality parameter changes every time the genre changes, the image quality of the contents displayed after correction does not change abruptly. With respect to the claimed “filtering processing,” Koike paragraph 57 teaches the following:
[0057] The parameter value adjustment section 41 adjusts the value of the image quality parameter based on an entry by the user accepted via the remote control receiver 35. Namely, when an entry by the user is accepted for adjusting the value of the image quality parameter that corresponds to the adjustment item while the adjustment item is displayed, adjustment is carried out based on this entry. Examples of this adjustment include video processing to a received video signal such as noise reduction, sharpness adjustment, and contrast adjustment.
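For illustration only, the gradual parameter change Koike describes (connecting the old genre’s preset value to the new genre’s preset value with a straight line and stepping through intermediate values) may be sketched as follows. This is a hypothetical sketch under that reading of Koike para 70; the function and the preset numbers are illustrative assumptions, not values from the reference.

```python
# Hypothetical sketch: move an image quality parameter from the preset
# for the old genre to the preset for the new genre along a straight
# line, taking a plurality of intermediate values (including the median)
# so the displayed image quality does not change abruptly.

def gradual_transition(old_value, new_value, steps):
    """Return the parameter values on the straight line connecting
    old_value to new_value, endpoints included."""
    return [old_value + (new_value - old_value) * i / (steps - 1)
            for i in range(steps)]

# Example: a brightness preset changes from 40 (old genre) to 80 (new genre).
print(gradual_transition(40, 80, 5))  # [40.0, 50.0, 60.0, 70.0, 80.0]
```

Note that the median value (60.0 here) lies on the line, matching Koike’s description of passing through intermediate values including a median.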
The inferences drawn from the combination of Kim and Koike are further evidenced in the prior art to Hoisko para 44-45, 61-63, 72, and 100-101, teaching employing visual effects on video images comprising gradient changes. Furthermore, the prior art to Daly teaches utilizing gradual visual changes corresponding to the type of content being displayed (para 140; see also para 44-55).
Furthermore, the inferences drawn above are also evidenced by Watanabe, which discloses a video receiving device able to adjust the intensity of a noise reduction process performed on video content in accordance with a current image quality mode type (see Watanabe para 121-154, discussing determining an intensity of a noise reduction process to be performed on the decoded image based on both the resolution of the target decoded image and the genre of the video content).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Kim’s invention (a receiver comprising modules for receiving audiovisual data including type information of the content, elements to extract the type information from the data and determine the type of the content by analyzing the content and comparing the analysis with the extracted type information, and a video processor enabled to process and adjust visual effects of the received audiovisual content) by further incorporating the known elements of Koike for extracting data from the broadcast signal by carrying out audio analysis, video analysis, etc., to extract content type information representing the type of program in order to effect picture quality adjustments based on values applied in an increasing manner. The motivation to combine is evidenced by the prior art to Hoisko, Daly, and Watanabe, which teaches how a person of ordinary skill in the art would employ visual effects on audiovisual content using filtering processing to control the intensity of presentation effects, comprising gradient changes based on the type of content displayed to the viewer.
Regarding claim 2, “wherein in the type determination processing, the determiner inputs the content to a recognition model constructed by machine learning, and obtains type information of the content as the second type information, the type information being output by the inputting of the content to the recognition model” is further rejected on obviousness grounds as discussed in the rejection of claim 1, wherein Kim para 71 states that the storage 250 may include a training module. The training module forms a model for each content genre, expressed by genre identification characteristic values corresponding to content genres, through training which uses a sample for each of the plurality of content, and may store the model in the storage 250. See also Kim para 90-94, element 224, a genre determination module, which compares the content genre received in the metadata with the content genre information from the analysis module, and control module 225, which controls the video and audio processing according to para 96-98 and 109-113.
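For illustration only, the mapping of a trained per-genre model to the claimed recognition-model-based type determination may be sketched as follows. This is a hypothetical sketch, not Kim’s implementation; the nearest-value classifier, the genre labels, and the characteristic values are illustrative assumptions standing in for a trained model.

```python
# Hypothetical sketch: a stored model holds one trained genre
# identification characteristic value per genre (cf. Kim's training
# module); the analyzed content's characteristic value is classified to
# the nearest genre, producing the second type information, which is
# then compared with the genre carried in the metadata.

def nearest_genre(analyzed_value, genre_models):
    """Return the genre whose stored characteristic value is closest
    to the analyzed characteristic value (a stand-in for inference
    with a trained recognition model)."""
    return min(genre_models, key=lambda g: abs(genre_models[g] - analyzed_value))

genre_models = {"sports": 0.9, "news": 0.2, "drama": 0.5}  # illustrative trained values

second_type = nearest_genre(0.85, genre_models)  # from content analysis
first_type = "sports"                            # from metadata
print(second_type, first_type == second_type)    # sports True
```

When the two type labels match, the generator of claim 1 would increase the presentation-effect intensity accordingly.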
Regarding claim 3, “wherein the first type information indicates the type of the content as a whole, and the determiner determines a type of each of a plurality items of partial content included in the content” is further rejected on obviousness grounds as discussed in the rejection of claims 1-2, wherein Koike para 33-36, 40, 51-54, and 63-80 teaches extracting data from the broadcast signal by carrying out audio analysis, video analysis, etc., to extract genre information representing the type of program. See also Koike para 36, 67-69, and 91: when a genre of the displayed contents changes to another genre, the change in the image quality parameter value from that corresponding to the genre prior to the change to that corresponding to the changed genre is carried out immediately. On the other hand, in a case in which setting is made so that the adjustment described above is not accepted, when the genre of the displayed contents changes, the change in the image quality parameter value is carried out gradually. By changing the image quality parameter values as such, it is possible to carry out image quality adjustment that fulfills the needs of the user.
Regarding claim 4, “wherein the obtainer obtains, as the first type information, information set as information indicating the type of the content from a device different from the control device” is further rejected on obviousness grounds as discussed in the rejection of claims 1-3, wherein Kim Fig. 3 and para 78-83 teach element 221, a genre extraction module, for identifying the genre characteristics of the audio and video content; see also para 90-94, element 224, a genre determination module, which compares the content genre received in the metadata with the content genre information from the analysis module, and control module 225, which controls the video and audio processing according to para 96-98 and 109-113. See also Koike para 33-36, 40, 51-54, and 63-80, teaching extracting data from the broadcast signal by carrying out audio analysis, video analysis, etc., to extract genre information representing the type of program; see also para 36: when a genre of the displayed contents changes to another genre, the change in the image quality parameter value from that corresponding to the genre prior to the change to that corresponding to the changed genre is carried out immediately. On the other hand, in a case in which setting is made so that the adjustment described above is not accepted, when the genre of the displayed contents changes, the change in the image quality parameter value is carried out gradually. By changing the image quality parameter values as such, it is possible to carry out image quality adjustment that fulfills the needs of the user.
Regarding claim 5, “wherein the obtainer obtains type information of the content as the first type information, the type information being obtained by analyzing the content obtained” is further rejected on obviousness grounds as discussed in the rejection of claims 1-4, wherein Kim Fig. 3 and para 78-83 teach element 221, a genre extraction module, for identifying the genre characteristics of the audio and video content; see also para 90-94, element 224, a genre determination module, which compares the content genre received in the metadata with the content genre information from the analysis module. See also Koike para 33-36, 40, 51-54, and 63-80, teaching extracting data from the broadcast signal by carrying out audio analysis, video analysis, etc., to extract genre information representing the type of program.
Regarding claim 6, “wherein the control information includes information indicating in time series the intensity of the presentation effect at the time of presentation of the content” is further rejected on obviousness grounds as discussed in the rejection of claims 1-5, wherein Koike Fig. 2 and para 69-72 disclose a time series for making changes based on genre changes. See also Kim para 83, disclosing a particular time at which a genre takes effect. See also Kim para 88-89.
Regarding claim 7, “wherein the generator performs processing of preventing a rapid change in the intensity of the presentation effect at the time of presentation of the content, when generating the control information” is further rejected on obviousness grounds as discussed in the rejection of claims 1-6, wherein Kim Fig. 3 and para 78-83 teach element 221, a genre extraction module, for identifying the genre characteristics of the audio and video content; see also para 90-94, element 224, a genre determination module, which compares the content genre received in the metadata with the content genre information from the analysis module, and control module 225, which controls the video and audio processing according to para 96-98 and 109-113. See also Koike para 70 and 100-104: the image quality parameter value can be gradually changed by a conventionally known method, for example by connecting the image quality parameter value set in advance for the genre before the change to the value set in advance for the genre after the change with a straight line, as illustrated in the graph of (a) of FIG. 2, and changing the value gradually by taking any plurality of values on the straight line, including a median value of the image quality parameter values. The inference drawn from the combination of Kim and Koike is further evidenced in the prior art to Hoisko para 44-45, 61-63, 72, and 100-101, teaching employing visual effects on video images comprising gradient changes. Furthermore, the prior art to Daly teaches utilizing gradual visual changes corresponding to the type of content being displayed (para 140; see also para 44-55).
Regarding claim 8, “wherein the generator includes association information in which type information indicating the type of the content and the presentation effect to be applied at the time of presentation of the content of the type are associated in advance, and generates, as the control information, control information for applying the presentation effect associated in advance with the first type information, when generating the control information” is further rejected on obviousness grounds as discussed in the rejection of claims 1-7, wherein Kim Fig. 3 and para 78-83 teach element 221, a genre extraction module, for identifying the genre characteristics of the audio and video content; see also para 90-94, element 224, a genre determination module, which compares the content genre received in the metadata with the content genre information from the analysis module, and control module 225, which controls the video and audio processing according to para 96-98 and 109-113. See also Koike para 70 and 100-104: the image quality parameter value can be gradually changed by a conventionally known method, for example by connecting the image quality parameter value set in advance for the genre before the change to the value set in advance for the genre after the change with a straight line, as illustrated in the graph of (a) of FIG. 2, and changing the value gradually by taking any plurality of values on the straight line, including a median value of the image quality parameter values. The inference drawn from the combination of Kim and Koike is further evidenced in the prior art to Hoisko para 44-45, 61-63, 72, and 100-101, teaching employing visual effects on video images comprising gradient changes.
Furthermore, the prior art to Daly teaches utilizing gradual visual changes corresponding to the type of content being displayed (para 140; see also para 44-55).
Regarding claim 9, “wherein the generator generates, as the control information, control information for increasing intensity of at least one of a sound effect or a video effect as the presentation effect at the time of presentation of the content” is further rejected on obviousness grounds as discussed in the rejection of claims 1-8, wherein Koike para 70 and 100-104 teach that the image quality parameter value can be gradually changed by a conventionally known method, for example by connecting the image quality parameter value set in advance for the genre before the change to the value set in advance for the genre after the change with a straight line, as illustrated in the graph of (a) of FIG. 2, and changing the value gradually by taking any plurality of values on the straight line, including a median value of the image quality parameter values. The inference drawn from the combination of Kim and Koike is further evidenced in the prior art to Hoisko para 44-45, 61-63, 72, and 100-101, teaching employing visual effects on video images comprising gradient changes. Furthermore, the prior art to Daly teaches utilizing gradual visual changes corresponding to the type of content being displayed (para 140; see also para 44-55).
Regarding claim 10, “wherein the generator receives an operation of setting an intensity range of the presentation effect from a user, and generates the control information for controlling the presentation effect within the intensity range set through the operation” is further rejected on obviousness grounds as discussed in the rejection of claims 1-9, wherein Koike para 35-36 and 57 teach user-made settings, interpreted as an operation made by a user to set a mode of operation comprising a range for presentation effects.
Regarding claim 11, “a control method comprising: obtaining content and first type information indicating a type of the content; performing type determination processing on the content obtained, to obtain second type information indicating a type of the content; and generating and outputting control information for increasing intensity of a presentation effect to be applied at a time of presentation of the content when the first type information and the second type information match, compared to when the first type information and the second type information do not match,” and claim 12, “a non-transitory computer-readable recording medium having recorded thereon a program for causing a computer to execute the control method according to claim 11”: the method claim 11 and the non-transitory computer-readable medium claim 12 are grouped and rejected with device claims 1-10 because the elements/components of the device are met by the disclosure of the apparatus and methods of the references as discussed in the rejection of claims 1-10, and because the elements/components of the device are easily converted into elements of computer-implemented methods and a non-transitory computer-readable recording medium by one of ordinary skill in the art.
Regarding claim 13, “wherein the filtering processing is one of low-pass filter processing, noise removal processing, or smoothing processing” is further rejected on obviousness grounds as discussed in claims 1-12, wherein Kim para 50-55 teaches known filtering processes as they pertain to controlling the video display based on identified genre data. See also Koike paragraph 57, which, with respect to the claimed “filtering processing,” teaches video processing applied to a received video signal such as noise reduction, sharpness adjustment, and contrast adjustment. See also Watanabe, which discloses a video receiving device able to adjust an intensity of a noise reduction process to be performed on video content in accordance with a current image quality mode (see Watanabe para 121-154, discussing determining an intensity of a noise reduction process to be performed on the decoded image, based on both the resolution of the target decoded image and the genre of the video content).
Regarding claim 14, “wherein the filtering processing is processing using a moving average” is further rejected on obviousness grounds as discussed in claims 1-13, wherein a moving average is interpreted as an average of data over a moving period/window, as Watanabe teaches: “[0331] Specifically, the 3D noise reduction process is a process in which an image after a noise reduction is generated by working out an average, for each pixel, of (i) a target region during a target frame and (ii) a target region during one or more reference frames before and/or after the target frame timewise.”
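For illustration only, the moving average as interpreted above (an average over a sliding period/window, analogous to Watanabe’s per-pixel average over a target frame and adjacent reference frames) may be sketched as follows. This is a hypothetical sketch; the edge-clipping behavior and the sample values are illustrative assumptions, not Watanabe’s implementation.

```python
# Hypothetical sketch: centered moving average over a sliding window.
# Each output value is the mean of the value at that position and its
# neighbors within the window, smoothing out abrupt changes (cf. a
# temporal average over reference frames before/after a target frame).

def moving_average(values, window=3):
    """Centered moving average; the window is clipped at the edges."""
    out = []
    for i in range(len(values)):
        lo = max(0, i - window // 2)
        hi = min(len(values), i + window // 2 + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

# A spike at index 2 is spread across its neighbors after averaging.
print(moving_average([10, 20, 60, 20, 10]))
```

The smoothed sequence no longer changes abruptly at the spike, which is the property the claimed filtering processing exploits.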
Regarding claim 15, “wherein the generator generates and outputs control information for changing intensity of a presentation effect to be applied at a time of presentation of the content, the control information being generated according to combination of the first type information and the second type information” is further rejected on obviousness grounds as discussed in claims 1-14, wherein Kim Fig. 3 and para 78-83 teach element 221, a genre extraction module, for identifying the genre characteristics/types of the audio and video content; see also para 90-94, element 224, a genre determination module, which compares the content genre received in the metadata with the content genre information from the analysis module, and control module 225, which controls the video and audio processing according to para 96-98 and 109-113. Kim para 13 and 109 teach obtaining video content comprising genre metadata and analyzing the content and metadata to determine the genre in the event that the content comprises different genres, wherein the controller is further configured to calculate the reliability of the genre information by comparing a first genre identification characteristic value corresponding to the genre information, which is acquired from the storage, and a second genre identification characteristic value which is acquired by analyzing the content. See also Koike para 33-36, 40, 51-54, and 63-80, teaching extracting data from the broadcast signal by carrying out audio analysis, video analysis, etc., to extract genre information representing the type of program in order to effect picture quality based on values applied in an increasing manner.
See also Koike para 70 and 100-104: the image quality parameter value can be gradually changed by a conventionally known method, for example by connecting the image quality parameter value set in advance for the genre before the change to the value set in advance for the genre after the change with a straight line, as illustrated in the graph of (a) of FIG. 2, and changing the value gradually by taking any plurality of values on the straight line, including a median value of the image quality parameter values. See also Koike para 17 and 67, teaching that when the genre of the displayed contents changes to another genre, the value of the image quality parameter is changed gradually to that set in advance corresponding to the other genre; as a result, even if the value of the image quality parameter changes every time the genre changes, the image quality of the contents displayed after correction does not change abruptly.
Regarding claim 16, “wherein the filtering processing is one of low-pass filter processing, noise removal processing, or smoothing processing” is further rejected on obviousness grounds as discussed in claims 1-15, wherein Watanabe discloses a video receiving device which has the ability to adjust an intensity of a noise reduction process to be performed on video content in accordance with a current image quality mode type (see Watanabe para 121-154, discussing determining an intensity of a noise reduction process to be performed on the decoded image, based on both the resolution of the target decoded image and the genre of the video content).
Regarding claim 17, “wherein the filtering processing is processing using a moving average” is further rejected on obviousness grounds as discussed in claims 1-16, wherein a moving average is interpreted as an average of data over a moving period/window, as Watanabe teaches: “[0331] Specifically, the 3D noise reduction process is a process in which an image after a noise reduction is generated by working out an average, for each pixel, of (i) a target region during a target frame and (ii) a target region during one or more reference frames before and/or after the target frame timewise.”
Regarding claim 18, “wherein the generating and outputting the control information includes generating and outputting control information for changing intensity of a presentation effect to be applied at a time of presentation of the content, the control information being generated according to combination of the first type information and the second type information” is further rejected on obviousness grounds as discussed in claims 1-17, wherein Kim Fig. 3 and para 78-83 teach element 221, a genre extraction module, for identifying the genre characteristics/types of the audio and video content; see also para 90-94, element 224, a genre determination module, which compares the content genre received in the metadata with the content genre information from the analysis module, and control module 225, which controls the video and audio processing according to para 96-98 and 109-113. Kim para 13 and 109 teach obtaining video content comprising genre metadata and analyzing the content and metadata to determine the genre in the event that the content comprises different genres, wherein the controller is further configured to calculate the reliability of the genre information by comparing a first genre identification characteristic value corresponding to the genre information, which is acquired from the storage, and a second genre identification characteristic value which is acquired by analyzing the content. See also Koike para 33-36, 40, 51-54, and 63-80, teaching extracting data from the broadcast signal by carrying out audio analysis, video analysis, etc., to extract genre information representing the type of program in order to effect picture quality based on values applied in an increasing manner.
See also Koike para 70 and 100-104: the image quality parameter value can be gradually changed by a conventionally known method, for example by connecting the image quality parameter value set in advance for the genre before the change to the value set in advance for the genre after the change with a straight line, as illustrated in the graph of (a) of FIG. 2, and changing the value gradually by taking any plurality of values on the straight line, including a median value of the image quality parameter values. See also Koike para 17 and 67, teaching that when the genre of the displayed contents changes to another genre, the value of the image quality parameter is changed gradually to that set in advance corresponding to the other genre; as a result, even if the value of the image quality parameter changes every time the genre changes, the image quality of the contents displayed after correction does not change abruptly.
CONCLUSION
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALFONSO CASTRO whose telephone number is (571)270-3950. The examiner can normally be reached on Monday to Friday from 10am to 6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nathan Flynn, can be reached. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALFONSO CASTRO/Primary Examiner, Art Unit 2421