Prosecution Insights
Last updated: April 19, 2026
Application No. 17/754,920

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND ARTIFICIAL INTELLIGENCE SYSTEM

Status: Non-Final Office Action (§103)
Filed: Apr 15, 2022
Examiner: LEE, MICHAEL CHRISTOPHER
Art Unit: 2128
Tech Center: 2100 — Computer Architecture & Software
Assignee: Saturn Licensing LLC
OA Round: 3 (Non-Final)

Grant Probability: 59% (Moderate)
Expected OA Rounds: 3-4
Expected Time to Grant: 3y 2m
Grant Probability With Interview: 86%

Examiner Intelligence

Career Allow Rate: 59% (grants 59% of resolved cases; 80 granted / 136 resolved; +3.8% vs TC avg)
Interview Lift: +27.1% (strong lift in allowance rate for resolved cases with an interview vs. without)
Typical Timeline: 3y 2m average prosecution; 54 applications currently pending
Career History: 190 total applications across all art units
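The headline figures above are internally consistent, and a quick recomputation from the raw counts shows how they fit together. The following is a minimal Python sketch, assuming the dashboard derives its metrics from the counts displayed; the variable names are illustrative, and the Tech Center average is only implied by the reported delta:

```python
# Minimal sketch recomputing the headline examiner metrics from the raw
# counts shown above; names are illustrative, not a vendor API.
granted = 80
resolved = 136

allow_rate = granted / resolved            # 0.588... -> displayed as 59%
tc_delta = 0.038                           # dashboard reports +3.8% vs TC avg
implied_tc_avg = allow_rate - tc_delta     # ~55% implied Tech Center average

interview_lift = 0.271                     # reported allowance-rate gap between
                                           # interviewed and non-interviewed cases

print(f"Career allow rate: {allow_rate:.1%}")                          # 58.8%
print(f"Implied TC average: {implied_tc_avg:.1%}")                     # ~55.0%
print(f"With-interview estimate: {allow_rate + interview_lift:.1%}")   # ~85.9%, i.e. the 86% shown
```

Note that 58.8% plus the 27.1-point interview lift lands at roughly 86%, matching the "With Interview" figure above.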

Statute-Specific Performance

§101: 29.1% (-10.9% vs TC avg)
§103: 45.0% (+5.0% vs TC avg)
§102: 11.5% (-28.5% vs TC avg)
§112: 12.3% (-27.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 136 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/7/2026 has been entered.

Response to Amendment

Applicant’s Amendment and remarks dated 1/7/2026 have been considered. Claims 1-18 and 21-22 are pending.

Response to Arguments

On page 9 of Applicant’s 1/7/2026 Amendment and remarks, Applicant identifies paras. 0071 and 0075 as written description support for the amendments to independent claims 1 and 18. The examiner agrees that such paragraphs, in addition to at least paras. 0062-0063, provide sufficient written description support for the amendments to claims 1 and 18.

On pages 9-10 of Applicant’s 1/7/2026 Amendment and remarks, with respect to the rejections of claims 1 and 18 under 35 U.S.C. 103 as obvious in view of ATKINS and DALY, Applicant argues that the prior art does not teach the “wherein the recognition gap corresponds to a difference between a signal representation of the reproduction content output to the user and a signal representation of the reproduction content intended by the creator” limitation. The examiner acknowledges that ATKINS and DALY do not explicitly teach the newly-added limitation. The previous rejections to claims 1 and 18 under 35 U.S.C. 103 are withdrawn. However, new grounds of rejection under 35 U.S.C. 103 in view of the ATKINS and WALLACE references are provided below.

On page 10 of Applicant’s 1/7/2026 Amendment and remarks, with respect to the rejections of dependent claims 9 and 11-17, Applicant argues that other art cited in the previous office action does not teach the “wherein the recognition gap corresponds to a difference between a signal representation of the reproduction content output to the user and a signal representation of the reproduction content intended by the creator” limitation. The examiner agrees that the JEON, CHOU, KIM, and PARK references do not teach this limitation. However, as explained above, this new limitation is taught by WALLACE, and new grounds of rejection in view of the WALLACE reference are provided below.

Claim Rejections - 35 USC § 103

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-8, 10, 18, and 21-22 are rejected under 35 U.S.C. 103 as being unpatentable over US 20130088644 A1, hereinafter referenced as ATKINS, in view of US 20160205370 A1, hereinafter referenced as WALLACE.

Regarding Claim 1

ATKINS teaches:

An information processing device comprising circuitry configured to: (ATKINS, para. 0021: “FIG.
1 is a schematic diagram of a video data distribution system.”; ATKINS, para. 0047: “Data processing stage 18 may comprise, for example a programmed processor executing suitable software and/or firmware instructions, configurable hardware, such as a suitably configured field programmable gate array (FPGA), hardware logic circuits (for example provided in one or more integrated circuits), or suitable combinations thereof. Data processing stage 18 may be hosted in a computer, a video-handling device, such as a transcoder, set-top box, digital video recorder, or the like, a display, such as a television, home theater display, digital cinema projector, computer video system, or the like, a component of a data transmission fabric, such as a cable television head end, a network access point or the like. Example embodiments provide such apparatus hosting a data processing stage 18 as described herein.”)

acquire reproduction content; (ATKINS, para. 0038: “FIG. 1 illustrates an example embodiment in which video data 10 is carried from a video source 12 by a transmission fabric 14 to a plurality of user locations 15.” ATKINS, para. 0039: “Transmission fabric 14 may comprise, for example, the internet, a cable television system, a wireless broadcasting system, media that are distributed such as DVDs, other data storage devices that contain video data, or any other data communication system capable of delivering video data 10 to user locations 15.”; Examiner’s Note (EN): the cable set-top box for the cable television system or a DVD acquires reproduction content in the form of video data 10)

acquire information regarding a viewing status; (ATKINS, para. 0055: “In some embodiments, a light sensor 26 associated with the display 16 detects ambient light (intensity and/or color) and communicates ambient light information 27 about the ambient light to data processing stage 18. Data processing stage 18 may then use ambient light information 27 to configure itself to deliver an appropriate video stream 23 to display 16.”)

perform an estimate of reproduction content to be output, (ATKINS, para. 0045: “Data processing stage 18 extracts from video data 10 the data required to create a video stream 23 optimized for display on a particular display 16.”; ATKINS, para. 0047: “Data processing stage 18 may comprise, for example a programmed processor executing suitable software and/or firmware instructions, configurable hardware, such as a suitably configured field programmable gate array (FPGA), hardware logic circuits (for example provided in one or more integrated circuits), or suitable combinations thereof. Data processing stage 18 may be hosted in a computer, a video-handling device, such as a transcoder, set-top box, digital video recorder, or the like, a display, such as a television, home theater display, digital cinema projector, computer video system, or the like, a component of a data transmission fabric, such as a cable television head end, a network access point or the like. Example embodiments provide such apparatus hosting a data processing stage 18 as described herein.”; Examiner’s Note (EN): the programmed processor executing suitable software and/or firmware instructions for data processing stage 18 performs the estimate of reproduction content to be output to a display)

on a basis of information regarding a user who views the reproduction content and information regarding a creator who has created the reproduction content; and (ATKINS, para. 0055: “As illustrated in FIG.
3, providing video data 10 which includes versions optimized for displaying video content on different displays and/or under different viewing conditions can facilitate maintaining creative control over the way that the video content appears to users. A producer of a video may have a specific creative intent that is intended to be conveyed to users through factors such as the specific palette of colors used in a scene, the contrast of a scene, the maximum brightness of a scene, etc. These and similar factors affect how the scene is perceived by viewers. The creative intent is content and producer-dependent and is very difficult to determine analytically.” ATKINS, para. 0061: “Thus, video data 10 may contain a number of different versions of the same video content each optimized for use on different displays and/or different viewing conditions according to the creative intent of producer P. This permits users to enjoy the video content in a manner that closely matches the producer's creative intent.” ATKINS, para. 0063: “As noted above, a data processing stage 18 may generate video data for a particular display that is based on but different from the base layer and the enhancement layers. In some embodiments, this involves interpolation and/or extrapolation between the base layer and an enhancement layer and/or between two enhancement layers. The interpolation or extrapolation may be based on capabilities of a target display relative to capabilities associated with the two layers.” Examiner’s Note: As disclosed by ATKINS, the data processing stage 18 generates video (corresponding to recited “reproduction content”) based on (1) “capabilities of a target display relative” (corresponding to recited “information regarding a user who views the reproduction content” – as shown in the instant disclosure at para. 0102, Table I, information about the user’s “display device” is considered to be information regarding a user) and (2) “different versions of the same video content each optimized for use on different displays and/or different viewing conditions according to the creative intent of producer P” (corresponding to recited “information regarding a creator who has created the reproduction content”). output the estimated reproduction content. (ATKINS, para. 0049: “For example, the user may have a smart phone 32A, a tablet computer 32B, a basic television 32C and a high-end home theater system 32C all capable of displaying video content. Each of these devices may have different capabilities for video display.”) However, ATKINS fails to explicitly teach: perform signal processing to reduce a recognition gap between the user and the creator with respect to the reproduction content; wherein the recognition gap corresponds to a difference between a signal representation of the reproduction content output to the user and a signal representation of the reproduction content intended by the creator. However, in a related field of endeavor (displaying audio and video signals, see para. 0002), WALLACE teaches: perform signal processing to reduce a recognition gap between the user and the creator with respect to the reproduction content; (WALLACE, para. 0039: “In addition, the SMPTE ST 2086 standard specifies the metadata items to define the color volume (the color primaries, white point, and luminance range) of the display that was used in mastering the video content. 
This information could be send with an image or scene to inform a consumer display of the characteristics of the mastering display in order to tune itself to recreate the mastering artist's intent originally achieved in the mastering suite.”; WALLACE, para. 0044: “FIG. 3 presents a block diagram representation of a dynamic range converter in accordance with an embodiment of the present disclosure. In particular, an embodiment of dynamic range converter 150 is presented that includes color space converters 205 and 235, linearizer 210 and delinearizer 230, color volume transformer 215, and optional compositor 220, color space converter 225, dithering limiter 240 and display encoder 245. In various embodiments, the color space converters 205 and 235, linearizer 210 and delinearizer 230, color volume transformer 215, and optional compositor 220, color space converter 225, dithering limiter 240 and display encoder 245 are implemented via a plurality of circuits such as a plurality of processing devices. Each processing device may be a microprocessor, micro-controller, digital signal processor, vector processor, microcomputer, central processing unit, field programmable gate array, programmable logic device, state machine, logic circuitry, digital circuitry, look-up table and/or any device that manipulates digital signals based on hard coding of the circuitry and/or operational instructions.” WALLACE, para. 0047: “The dynamic range converter 150 can provide a generic and reconfigurable architecture to cover a wide variety of HDR standards and proposals. In various embodiments, the source dynamic range, mastering dynamic range and/or the display dynamic range are each independently configurable based on the configuration data 206. The shifters of the linearizer 210 can convert the source dynamic range of the color components into a mastering dynamic range that comports with the dynamic color transform metadata applied by color volume transformer 215 to recreate the artist's intent. In a similar fashion, shifters of the delinearizer 230 can convert the mastering dynamic range of the color components into a display dynamic range that, for example, comports with capabilities of the display encoder 245 and/or a display device that will reproduce the video. In this fashion, the dynamic range converter 150 can selectively operate in a first mode of operation where the source dynamic range is a high dynamic range and the display dynamic range is a standard dynamic range at selectable mastering dynamic range levels.” WALLACE, para. 0071: “In addition, the dynamic color transform metadata 202 can be applied on a frame by frame basis to generate transform color space signals 332 that reflect the artistic intent for each frame.” Examiner’s Note: WALLACE discloses a dynamic range converter 150 that performs signal processing to reduce the difference between the dynamic range of the master (with respect to the creator’s artistic intent) and the capabilities of a display device; the ATKINS-WALLACE combination now applies the dynamic range modifications to the system of ATKINS to optimize displays and/or viewing conditions in ATKINS to align with the creative intent of a producer as in ATKINS) wherein the recognition gap corresponds to a difference between a signal representation of the reproduction content output to the user and a signal representation of the reproduction content intended by the creator. (WALLACE, para. 
0039: “In addition, the SMPTE ST 2086 standard specifies the metadata items to define the color volume (the color primaries, white point, and luminance range) of the display that was used in mastering the video content. This information could be send with an image or scene to inform a consumer display of the characteristics of the mastering display in order to tune itself to recreate the mastering artist's intent originally achieved in the mastering suite.”; WALLACE, para. 0044: “FIG. 3 presents a block diagram representation of a dynamic range converter in accordance with an embodiment of the present disclosure. In particular, an embodiment of dynamic range converter 150 is presented that includes color space converters 205 and 235, linearizer 210 and delinearizer 230, color volume transformer 215, and optional compositor 220, color space converter 225, dithering limiter 240 and display encoder 245. In various embodiments, the color space converters 205 and 235, linearizer 210 and delinearizer 230, color volume transformer 215, and optional compositor 220, color space converter 225, dithering limiter 240 and display encoder 245 are implemented via a plurality of circuits such as a plurality of processing devices. Each processing device may be a microprocessor, micro-controller, digital signal processor, vector processor, microcomputer, central processing unit, field programmable gate array, programmable logic device, state machine, logic circuitry, digital circuitry, look-up table and/or any device that manipulates digital signals based on hard coding of the circuitry and/or operational instructions.” WALLACE, para. 0047: “The dynamic range converter 150 can provide a generic and reconfigurable architecture to cover a wide variety of HDR standards and proposals. In various embodiments, the source dynamic range, mastering dynamic range and/or the display dynamic range are each independently configurable based on the configuration data 206. The shifters of the linearizer 210 can convert the source dynamic range of the color components into a mastering dynamic range that comports with the dynamic color transform metadata applied by color volume transformer 215 to recreate the artist's intent. In a similar fashion, shifters of the delinearizer 230 can convert the mastering dynamic range of the color components into a display dynamic range that, for example, comports with capabilities of the display encoder 245 and/or a display device that will reproduce the video. In this fashion, the dynamic range converter 150 can selectively operate in a first mode of operation where the source dynamic range is a high dynamic range and the display dynamic range is a standard dynamic range at selectable mastering dynamic range levels.” WALLACE, para. 
0071: “In addition, the dynamic color transform metadata 202 can be applied on a frame by frame basis to generate transform color space signals 332 that reflect the artistic intent for each frame.” Examiner’s Note: WALLACE discloses a dynamic range converter 150 that performs signal processing to reduce the difference between the dynamic range of the master (with respect to the creator’s artistic intent) and the capabilities of a display device, where there is a difference in the dynamic range of the signals from the master and that are capable of being output to the display device; the ATKINS-WALLACE combination now applies the dynamic range modifications to the system of ATKINS to optimize displays and/or viewing conditions in ATKINS to align with the creative intent of a producer as in ATKINS)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the video display system of ATKINS with the dynamic range conversion techniques of WALLACE as explained above. As disclosed by WALLACE, one of ordinary skill would have been motivated to do so in order to enable the reduced dynamic range of a display device to match the dynamic range in mastering. (para. 0050).

Regarding Claim 2

ATKINS and WALLACE disclose the device of claim 1. ATKINS further teaches:

wherein the information regarding the user includes information regarding at least one of a state of the user, a profile of the user, an installation environment of the information processing device, hardware information about the information processing device, and ... (ATKINS, para. 0045: “In some embodiments, a light sensor 26 associated with the display 16 detects ambient light (intensity and/or color) and communicates ambient light information 27 about the ambient light to data processing stage 18. Data processing stage 18 may then use ambient light information 27 to configure itself to deliver an appropriate video stream 23 to display 16.”; ATKINS, para. 0046: “data processing stage 18 accesses previously stored account or user preference information (not shown) to configure itself to deliver an appropriate video stream 23 to display 16. The previously-stored information may, for example, indicate that a particular display is used in a darkened theater. In other embodiments, data processing stage 18 receives user input regarding ambient conditions at the location of a display and uses that user input to configure itself to deliver an appropriate video stream 23 to display 16. The user input may, for example, be accepted at a computer, display, remote control, portable digital device, smart phone or the like and delivered to data processing stage 18 by way of any suitable data communication path.” ATKINS, para. 0063: “As noted above, a data processing stage 18 may generate video data for a particular display that is based on but different from the base layer and the enhancement layers. In some embodiments, this involves interpolation and/or extrapolation between the base layer and an enhancement layer and/or between two enhancement layers.
The interpolation or extrapolation may be based on capabilities of a target display relative to capabilities associated with the two layers.”; Examiner’s Note (EN): previously stored account or user preference information corresponds to recited “profile of a user”, ambient light corresponds to recited “installation environment of the information processing device”, capabilities of the target display correspond to recited “hardware information about the information processing device” and data processing stage 18 performs “signal processing to be performed in the information processing device”)

However, ATKINS fails to explicitly teach:

the signal processing to be performed in the information processing device

However, in a related field of endeavor (displaying audio and video signals, see para. 0002), WALLACE teaches:

the signal processing to be performed in the information processing device; (WALLACE, para. 0044: “FIG. 3 presents a block diagram representation of a dynamic range converter in accordance with an embodiment of the present disclosure. In particular, an embodiment of dynamic range converter 150 is presented that includes color space converters 205 and 235, linearizer 210 and delinearizer 230, color volume transformer 215, and optional compositor 220, color space converter 225, dithering limiter 240 and display encoder 245. In various embodiments, the color space converters 205 and 235, linearizer 210 and delinearizer 230, color volume transformer 215, and optional compositor 220, color space converter 225, dithering limiter 240 and display encoder 245 are implemented via a plurality of circuits such as a plurality of processing devices. Each processing device may be a microprocessor, micro-controller, digital signal processor, vector processor, microcomputer, central processing unit, field programmable gate array, programmable logic device, state machine, logic circuitry, digital circuitry, look-up table and/or any device that manipulates digital signals based on hard coding of the circuitry and/or operational instructions.” WALLACE, para. 0047: “The dynamic range converter 150 can provide a generic and reconfigurable architecture to cover a wide variety of HDR standards and proposals. In various embodiments, the source dynamic range, mastering dynamic range and/or the display dynamic range are each independently configurable based on the configuration data 206. The shifters of the linearizer 210 can convert the source dynamic range of the color components into a mastering dynamic range that comports with the dynamic color transform metadata applied by color volume transformer 215 to recreate the artist's intent. In a similar fashion, shifters of the delinearizer 230 can convert the mastering dynamic range of the color components into a display dynamic range that, for example, comports with capabilities of the display encoder 245 and/or a display device that will reproduce the video. In this fashion, the dynamic range converter 150 can selectively operate in a first mode of operation where the source dynamic range is a high dynamic range and the display dynamic range is a standard dynamic range at selectable mastering dynamic range levels.” WALLACE, para.
0071: “In addition, the dynamic color transform metadata 202 can be applied on a frame by frame basis to generate transform color space signals 332 that reflect the artistic intent for each frame.” Examiner’s Note: WALLACE discloses a dynamic range converter 150 that performs signal processing to reduce the difference between the dynamic range of the master (with respect to the creator’s artistic intent) and the capabilities of a display device, where there is a difference in the dynamic range of the signals from the master and that are capable of being output to the display device, where such processing is done by a digital signal processor, for example; the ATKINS-WALLACE combination now applies the digital signal processor of WALLACE to ATKINS to optimize displays and/or viewing conditions in ATKINS to align with the creative intent of a producer as in ATKINS)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the video display system of ATKINS with the dynamic range conversion techniques of WALLACE as explained above. As disclosed by WALLACE, one of ordinary skill would have been motivated to do so in order to enable the reduced dynamic range of a display device to match the dynamic range in mastering. (para. 0050).

Regarding Claim 3

ATKINS and WALLACE disclose the device of claim 1. ATKINS further teaches:

wherein the information regarding the user includes information detected by the circuitry. (ATKINS, para. 0045: “In some embodiments, a light sensor 26 associated with the display 16 detects ambient light (intensity and/or color) and communicates ambient light information 27 about the ambient light to data processing stage 18. Data processing stage 18 may then use ambient light information 27 to configure itself to deliver an appropriate video stream 23 to display 16.”)

Regarding Claim 4

ATKINS and WALLACE disclose the device of claim 1. ATKINS further teaches:

wherein the information regarding the creator includes information regarding at least one of a state of the creator, a profile of the creator, a creation environment of the content, hardware information about a device used in creation of the content, and ... (ATKINS, para. 0055: “As illustrated in FIG. 3, providing video data 10 which includes versions optimized for displaying video content on different displays and/or under different viewing conditions can facilitate maintaining creative control over the way that the video content appears to users. A producer of a video may have a specific creative intent that is intended to be conveyed to users through factors such as the specific palette of colors used in a scene, the contrast of a scene, the maximum brightness of a scene, etc. These and similar factors affect how the scene is perceived by viewers. The creative intent is content and producer-dependent and is very difficult to determine analytically.” ATKINS, para. 0057: “The producer may view video content for base layer 20 on display 26A and use a color grading station 28 to optimize the appearance of the video content on display 26A according to the producer's creative intent. Color grading station 28 may take as input high-quality video data 29. Video data 29 may, for example, have a high dynamic range, wide gamut, and large bit depth.” ATKINS, para.
0059: “The producer may also view the video content for enhancement layer 22B on display 26B under bright ambient conditions and use color grading station 28 to optimize the appearance of the video content for enhancement layer 22B on display 26B according to the producer's creative intent. Color grading station 28 may comprise automated or semi-automated tools to assist the producer in efficiently creating multiple versions of the video content for display on different displays under different ambient conditions.”; ATKINS, para. 0061: “Thus, video data 10 may contain a number of different versions of the same video content each optimized for use on different displays and/or different viewing conditions according to the creative intent of producer P. This permits users to enjoy the video content in a manner that closely matches the producer's creative intent.” Examiner’s Note (EN): the specific palette of colors used in a scene, the contrast of a scene, the maximum brightness of a scene, etc., correspond to the recited “creation environment of the content” and base layer 20 and enhancement layers 22A-C correspond to the recited “signal processing to be performed when the content is uploaded”)

However, ATKINS fails to explicitly teach:

the signal processing to be performed when the content is uploaded

However, in a related field of endeavor (displaying audio and video signals, see para. 0002), WALLACE teaches:

the signal processing to be performed when the content is uploaded; (WALLACE, para. 0044: “FIG. 3 presents a block diagram representation of a dynamic range converter in accordance with an embodiment of the present disclosure. In particular, an embodiment of dynamic range converter 150 is presented that includes color space converters 205 and 235, linearizer 210 and delinearizer 230, color volume transformer 215, and optional compositor 220, color space converter 225, dithering limiter 240 and display encoder 245. In various embodiments, the color space converters 205 and 235, linearizer 210 and delinearizer 230, color volume transformer 215, and optional compositor 220, color space converter 225, dithering limiter 240 and display encoder 245 are implemented via a plurality of circuits such as a plurality of processing devices. Each processing device may be a microprocessor, micro-controller, digital signal processor, vector processor, microcomputer, central processing unit, field programmable gate array, programmable logic device, state machine, logic circuitry, digital circuitry, look-up table and/or any device that manipulates digital signals based on hard coding of the circuitry and/or operational instructions.” Examiner’s Note: WALLACE discloses a dynamic range converter 150 that performs signal processing to reduce the difference between the dynamic range of the master (with respect to the creator’s artistic intent) and the capabilities of a display device, where there is a difference in the dynamic range of the signals from the master and that are capable of being output to the display device; the ATKINS-WALLACE combination now applies the digital signal conversions of WALLACE to ATKINS to optimize displays and/or viewing conditions in ATKINS to align with the creative intent of a producer as in ATKINS)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the video display system of ATKINS with the dynamic range conversion techniques of WALLACE as explained above.
As disclosed by WALLACE, one of ordinary skill would have been motivated to do so in order to enable the reduced dynamic range of a display device to match the dynamic range in mastering. (para. 0050). Regarding Claim 5 ATKINS and WALLACE disclose the device of claim 1. ATKINS further teaches: wherein the information regarding the creator includes information corresponding to the information regarding the user. (ATKINS, para. 0057: “The producer may view video content for base layer 20 on display 26A and use a color grading station 28 to optimize the appearance of the video content on display 26A according to the producer's creative intent. Color grading station 28 may take as input high-quality video data 29. Video data 29 may, for example, have a high dynamic range, wide gamut, and large bit depth.” ATKINS, para. 0059: “The producer may also view the video content for enhancement layer 22B on display 26B under bright ambient conditions and use color grading station 28 to optimize the appearance of the video content for enhancement layer 22B on display 26B according to the producer's creative intent. Color grading station 28 may comprise automated or semi-automated tools to assist the producer in efficiently creating multiple versions of the video content for display on different displays under different ambient conditions.”; Examiner’s Note: the information about particular displays 26A and 26B, corresponding to recited “information regarding the creator” also corresponds to the user because these display settings correspond to the display settings for the user’s display devices too). Regarding Claim 6 ATKINS and WALLACE disclose the device of claim 1. ATKINS further teaches: wherein the circuitry is further configured to estimate the signal processing for the reproduction content as control for estimating the reproduction content to be output. (ATKINS, para. 0070: “Data processing stage 18 may derive video stream 23 or file 24 by interpolation and/or extrapolation from video streams corresponding to two or more of base layer 20 and enhancement layers 22. In cases where all of the versions of video content in video data 10 have been individually optimized to preserve a creative intent (for example by suitable color timing), the interpolated or extrapolated values may be expected to preserve that creative intent.”; (EN): the process implementing data processing stage 18 uses base layer 20 and enhancement layers 22 (corresponding to “estimated signal processing for the reproduction content”) and also modifies these layers for output to the user’s device) Regarding Claim 7 ATKINS and WALLACE disclose the device of claim 6. ATKINS further teaches: wherein the signal processing for the reproduction content is a process of associating a video image or sound of the reproduction content recognized by the user with a video image or sound of the reproduction content recognized by the creator. (ATKINS, para. 0070: “Data processing stage 18 may derive video stream 23 or file 24 by interpolation and/or extrapolation from video streams corresponding to two or more of base layer 20 and enhancement layers 22. 
In cases where all of the versions of video content in video data 10 have been individually optimized to preserve a creative intent (for example by suitable color timing), the interpolated or extrapolated values may be expected to preserve that creative intent.”; (EN): the original base layer 20 and enhancement layers 22 correspond to recited “video image or sound of the reproduction content recognized by the creator” and the interpolated view that is going to be displayed to the user corresponds to the recited “video image or sound of the reproduction content recognized by the user”) Regarding Claim 8 ATKINS and WALLACE disclose the device of claim 6. ATKINS further teaches: wherein the reproduction content includes a video signal, and the signal processing includes at least one of resolution conversion, dynamic range conversion, noise reduction, and gamma processing. (ATKINS, paras. 0029-0036: “One aspect of this invention relates to formats for the delivery of video data. The formats provide a base layer and a number of enhancement layers. Each enhancement layer may provide one or more of the following, for example: ... increased dynamic range; ... increased spatial resolution”) Regarding Claim 10 ATKINS and WALLACE disclose the device of claim 6. ATKINS further teaches: wherein the circuitry is further configured to acquire feedback about the reproduction content output on a basis of the signal processing, and ... (ATKINS, para. 0046: “In other embodiments, data processing stage 18 accesses previously stored account or user preference information (not shown) to configure itself to deliver an appropriate video stream 23 to display 16. The previously-stored information may, for example, indicate that a particular display is used in a darkened theater. In other embodiments, data processing stage 18 receives user input regarding ambient conditions at the location of a display and uses that user input to configure itself to deliver an appropriate video stream 23 to display 16. The user input may, for example, be accepted at a computer, display, remote control, portable digital device, smart phone or the like and delivered to data processing stage 18 by way of any suitable data communication path.”; (EN): the broadest reasonable interpretation of “feedback” includes user settings with respect to image quality and about the user environment as explained in para. 0090 to the instant specification, and ATKINS teaches saving user preferences for configuring the video stream to the display (corresponding to recited user settings) and that the display is used in a darkened theatre (corresponding to user environment)) However, ATKINS fails to explicitly teach: further perform the signal processing on a basis of the feedback However, in a related field of endeavor (displaying audio and video signals, see para. 0002), WALLACE teaches: further perform the signal processing on a basis of the feedback (WALLACE, para. 0044: “FIG. 3 presents a block diagram representation of a dynamic range converter in accordance with an embodiment of the present disclosure. In particular, an embodiment of dynamic range converter 150 is presented that includes color space converters 205 and 235, linearizer 210 and delinearizer 230, color volume transformer 215, and optional compositor 220, color space converter 225, dithering limiter 240 and display encoder 245. 
In various embodiments, the color space converters 205 and 235, linearizer 210 and delinearizer 230, color volume transformer 215, and optional compositor 220, color space converter 225, dithering limiter 240 and display encoder 245 are implemented via a plurality of circuits such as a plurality of processing devices. Each processing device may be a microprocessor, micro-controller, digital signal processor, vector processor, microcomputer, central processing unit, field programmable gate array, programmable logic device, state machine, logic circuitry, digital circuitry, look-up table and/or any device that manipulates digital signals based on hard coding of the circuitry and/or operational instructions.” WALLACE, para. 0047: “The dynamic range converter 150 can provide a generic and reconfigurable architecture to cover a wide variety of HDR standards and proposals. In various embodiments, the source dynamic range, mastering dynamic range and/or the display dynamic range are each independently configurable based on the configuration data 206. The shifters of the linearizer 210 can convert the source dynamic range of the color components into a mastering dynamic range that comports with the dynamic color transform metadata applied by color volume transformer 215 to recreate the artist's intent. In a similar fashion, shifters of the delinearizer 230 can convert the mastering dynamic range of the color components into a display dynamic range that, for example, comports with capabilities of the display encoder 245 and/or a display device that will reproduce the video. In this fashion, the dynamic range converter 150 can selectively operate in a first mode of operation where the source dynamic range is a high dynamic range and the display dynamic range is a standard dynamic range at selectable mastering dynamic range levels.” WALLACE, para. 0071: “In addition, the dynamic color transform metadata 202 can be applied on a frame by frame basis to generate transform color space signals 332 that reflect the artistic intent for each frame.” Examiner’s Note: WALLACE discloses a dynamic range converter 150 that performs signal processing to reduce the difference between the dynamic range of the master (with respect to the creator’s artistic intent) and the capabilities of a display device, where there is a difference in the dynamic range of the signals from the master and that are capable of being output to the display device, where such processing is done by a digital signal processor, for example; the ATKINS-WALLACE combination now applies the digital signal processor of WALLACE to ATKINS to optimize displays and/or viewing conditions in ATKINS to align with the creative intent of a producer as in ATKINS using the feedback as in ATKINS)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the video display system of ATKINS with the dynamic range conversion techniques of WALLACE as explained above. As disclosed by WALLACE, one of ordinary skill would have been motivated to do so in order to enable the reduced dynamic range of a display device to match the dynamic range in mastering. (para. 0050).

Claim 18 recites an information processing method that corresponds to the device of claim 1 and is therefore rejected for the same reasons explained above with respect to claim 1.
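To make the repeatedly cited WALLACE signal chain concrete: the rejections above turn on a linearizer → color volume transform → delinearizer pipeline that narrows the gap between the mastering dynamic range and the display's range. The following Python sketch is a simplified illustration of that kind of chain under stated assumptions (a plain gamma linearization and a Reinhard-style roll-off standing in for the metadata-driven color volume transformer 215); it is not the reference's actual implementation.

```python
# Simplified illustration of a WALLACE-style dynamic range conversion chain
# (linearizer -> color volume transform -> delinearizer). The gamma values,
# peak luminances, and Reinhard-style tone curve are illustrative stand-ins,
# not taken from the WALLACE reference.
import numpy as np

def linearize(encoded, gamma=2.4):
    """Convert gamma-encoded source samples (0..1) to linear light."""
    return np.clip(encoded, 0.0, 1.0) ** gamma

def color_volume_transform(linear, master_peak_nits=1000.0, display_peak_nits=100.0):
    """Compress the mastering dynamic range toward the display's capability.

    A Reinhard-style curve stands in for the metadata-driven transform that
    maps the creator's mastering range onto the target display's range.
    """
    nits = linear * master_peak_nits
    compressed = nits / (1.0 + nits / display_peak_nits)  # soft roll-off of highlights
    return compressed / display_peak_nits                 # renormalize to 0..1

def delinearize(linear, gamma=2.4):
    """Re-encode linear light for the display's transfer function."""
    return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)

# Example: an HDR-mastered ramp rendered for an SDR display.
source = np.linspace(0.0, 1.0, 5)
output = delinearize(color_volume_transform(linearize(source)))
print(output.round(3))
```

In this toy chain, the compressed highlight region is exactly the part of the signal where the difference between the creator's mastered intent and the displayed output would otherwise be largest.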
Claim 21 depends from claim 18 and claims a method that corresponds to the information processing device of claim 2, and is therefore rejected for the same reasons explained above with respect to claims 2 and 18. Claim 22 depends from claim 18 and claims a method that corresponds to the information processing device of claim 3, and is therefore rejected for the same reasons explained above with respect to claims 3 and 18. Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over ATKINS in view of WALLACE and further in view of US 20200037091 A1, hereinafter JEON. Regarding Claim 9 ATKINS and WALLACE disclose the device of claim 6. However, ATKINS and WALLACE fail to explicitly teach: wherein the reproduction content includes an audio signal, and the signal processing includes at least one of band extension and sound localization. However, in a related field of endeavor (signal processing for head mounted displays, para. 0001), JEON teaches: wherein the reproduction content includes an audio signal, and the signal processing includes at least one of band extension and sound localization. (JEON, para. 0081: “Meanwhile, in the case of a binaural rendered audio signal, the performance of the sound localization that defines the location of the sound incident to the front or rear of the listener may be reduced. ... The audio signal processing apparatus 100 according to an embodiment may generate an output audio signal by modeling frequency characteristics of a transfer function corresponding to each of the at least one reflected sound. Accordingly, the audio signal processing apparatus 100 may efficiently increase the front-rear sound localization performance in terms of calculation amount.”; (EN): in combination with ATKINS and WALLACE, the data processing stage 18 of ATKINS now includes an audio signal processing apparatus as in JEON to perform sound localization on an audio signal that accompanies the video signal of ATKINS) Before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the video display system of ATKINS with the teachings of WALLACE and JEON as explained above. As disclosed by JEON, one of ordinary skill would have been motivated to do so because JEON teaches techniques for “reproducing the spatial sound in which an interactive of a user is reflected by using a relatively small amount of computation.” (para. 0006). Moreover, one of ordinary skill would further be motivated to do so because JEON teaches that “in the case of a binaural rendered audio signal, the performance of the sound localization that defines the location of the sound incident to the front or rear of the listener may be reduced,” and JEON teaches ways so that the listener can more easily distinguish the sound’s source. (para. 0081). Claims 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over ATKINS in view of WALLACE and further in view of US 20190075301 A1, hereinafter referenced as CHOU. Regarding Claim 11 ATKINS and WALLACE disclose the device of claim 6. However, ATKINS and WALLACE fail to explicitly teach: wherein the circuitry is further configured to further acquire a learning model that is generated on a basis of the information regarding the user and the information regarding the creator, and the circuitry is further configured to estimate the signal processing on a basis of the learning model. However, in a related field of endeavor (video processing, see para. 
0002), CHOU teaches: wherein the circuitry is further configured to further acquire a learning model that is generated on a basis of the information regarding the user and the information regarding the creator, and the circuitry is further configured to estimate the signal processing on a basis of the learning model. (CHOU, para. 0084: “an example of a convolutional neural network block 34A, which may be implemented as a machine learning block 34, is shown in FIG. 7. Generally, the convolutional neural network block 34A includes one or more convolution layers 66, which each implements convolution weights 68, connected via layer interconnections 71.” CHOU, para. 0140: “In any case, the machine learning block 34 may be trained to determine and implement machine learning parameters 64 (process block 150). In some embodiments, the machine learning block 34 may be trained offline, for example, by implementing the process 86 of FIG. 9 based on training image data and actual video quality corresponding with the training image data. Additionally or alternatively, the machine learning block 34 may be trained online, for example, by feeding back actual video quality resulting from display of decoded image data 126.”; CHOU, para. 0152: “To facilitate improving transcoding, it is desirable to transcode a bit stream using a lower bitrate, possible lower resolution and lower frame rate. By analyzing using machine learning techniques, characteristics of a bit stream may be determined and, for example, indicated in a metadata file and/or derived at the decoder stage of the transcoder. Given analysis of the bit stream and/or other relevant conditions (e.g., network information, display/device capabilities, current power usage, resource information, applications running on the device, and/or expected user streaming behaving), a convolutional neural network may determine “coding control” parameters for encoding the video. These could include QP for the frame or regions, GOP structure, intra frames, lambda parameters for mode decision and motion estimation (e.g., if used by transcoder), downscaling ratio as well as filters for the downscaling. In some embodiments, a convolutional neural network may be implemented to facilitate performing motion compensated temporal filtering, which at least in some instances may further assist with compression through pre-filtering.” Examiner’s Note: CHOU teaches that a convolutional neural network (corresponding to recited “learning model”) is trained using source images (corresponding to “information regarding the creator” because these are the original files prepared by the creator) and actual video output, which can be live (corresponding to “information regarding the user” because the actual video output is by the user’s device); in combination with ATKINS and WALLACE, the data processing stage 18 of ATKINS now utilizes the CNN of CHOU when transcoding the source video files to the video files for the target computer). Before the effective filing date of the present application, one of ordinary skill in the art would have been motivated to combine the video processing system of ATKINS with teachings of WALLACE and CHOU as explained above. 
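For context on the CHOU citations, the sketch below shows a minimal convolutional network of the kind the examiner maps to the claimed learning model: the trained convolution weights play the role of the recited "coupling weight coefficients," and the scalar output stands in for an estimated coding-control parameter such as a QP value. The architecture, layer sizes, and names are illustrative assumptions, not CHOU's disclosure.

```python
# Minimal sketch of a CNN whose trained convolution weights correspond to the
# "coupling weight coefficients" discussed above, and whose output stands in
# for an estimated coding-control parameter (e.g., a QP value). Layer sizes
# and names are illustrative assumptions, not taken from the CHOU reference.
import torch
import torch.nn as nn

class CodingControlEstimator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolution weights = coupling coefficients
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # single scalar "coding control" output

    def forward(self, frame):
        x = self.features(frame).flatten(1)
        return self.head(x)

model = CodingControlEstimator()
frame = torch.rand(1, 3, 64, 64)   # stand-in for a decoded video frame
qp_estimate = model(frame)         # estimate produced by the set weights
print(qp_estimate.shape)           # torch.Size([1, 1])
```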
As disclosed by CHOU, one of ordinary skill would have been motivated to do so because CHOU teaches that “varying the encoding parameters may affect encoding efficiency (e.g., size of encode image data and/or encoding throughput), decoding efficiency (e.g., decoding throughput), and/or video quality expected to result when corresponding decoded image data is used to display an image.” (para. 0006). CHOU further discloses that video quality may “be improved by leveraging machine learning techniques” such as CNNs. (CHOU, paras. 0015, 0084). One of ordinary skill would further be motivated to do so, because when streaming video content over the Internet (which is contemplated by ATKINS at paras. 0039, 0068), one of ordinary skill would understand that video content needs to be transcoded with respect to the user’s target device, for example, the device may only be able to display video in HD but the source copy is in 4K, or the device may only have standard dynamic range, but the source copy was mastered in Dolby Vision or HDR.

Regarding Claim 12

ATKINS, WALLACE, and CHOU disclose the device of claim 11.

However, ATKINS and WALLACE fail to explicitly teach:

wherein the learning model includes a set of coupling weight coefficients between neurons in a neural network, and the circuitry is further configured to estimate the signal processing on a basis of a neural network in which a coupling weight coefficient included in the learning model is set.

However, in a related field of endeavor (video processing, see para. 0002), CHOU teaches:

wherein the learning model includes a set of coupling weight coefficients between neurons in a neural network, and (CHOU, para. 0046: “For example, when the machine learning block implements convolutional neural network (CNN) techniques, the machine learning parameters may indicate number of convolution layers, inter-connections between layers, and/or convolution weights (e.g., coefficients) corresponding to each convolution layer.”; (EN): the inter-connections between layers, and/or convolution weights correspond to the recited “coupling weight coefficients between neurons in a neural network”)

the circuitry is further configured to estimate the signal processing on a basis of a neural network in which a coupling weight coefficient included in the learning model is set. (CHOU, para. 0046: “To facilitate identifying characteristics of an image, a machine learning block may be trained to determine and implement machine learning parameters. For example, when the machine learning block implements convolutional neural network (CNN) techniques, the machine learning parameters may indicate number of convolution layers, inter-connections between layers, and/or convolution weights (e.g., coefficients) corresponding to each convolution layer.”; CHOU, para. 0140: “the machine learning block 34 may be trained offline”; (EN): the inter-connections between layers, and/or convolution weights correspond to the recited “coupling weight coefficients” and these connections are pre-trained or fixed (corresponding to recited “set”) because such training can occur offline).

Before the effective filing date of the present application, one of ordinary skill in the art would have been motivated to combine the video processing system of ATKINS with the teachings of WALLACE and CHOU as explained above.
As disclosed by CHOU, one of ordinary skill would have been motivated to do so because CHOU teaches that “varying the encoding parameters may affect encoding efficiency (e.g., size of encode image data and/or encoding throughput), decoding efficiency (e.g., decoding throughput), and/or video quality expected to result when corresponding decoded image data is used to display an image.” (para. 0006). CHOU further discloses that video quality may “be improved by leveraging machine learning techniques” such as CNNs. (CHOU, paras. 0015, 0084). One of ordinary skill would further be motivated to do so, because when streaming video content over the Internet (which is contemplated by ATKINS at paras. 0039, 0068), one of ordinary skill would understand that video content needs to be transcoded with respect to the user’s target device, for example, the device may only be able to display video in HD but the source copy is in 4K, or the device may only have standard dynamic range, but the source copy was mastered in Dolby Vision or HDR.

Regarding Claim 13

ATKINS, WALLACE, and CHOU disclose the device of claim 12.

However, ATKINS and WALLACE fail to explicitly teach:

wherein the learning model includes a set of coupling weight coefficients between neurons in a neural network that learns a correlation to reproduction content signal processing corresponding to a combination of the reproduction content, the information regarding the user, and the information regarding the creator, and the circuitry is further configured to perform an estimation of the signal processing corresponding to the combination of the reproduction content, the information regarding the user, and the information regarding the creator, on a basis of a neural network in which a coupling weight coefficient included in the learning model is set.

However, in a related field of endeavor (video processing, see para. 0002), CHOU teaches:

wherein the learning model includes a set of coupling weight coefficients between neurons in a neural network that learns a correlation to reproduction content signal processing corresponding to a combination of the reproduction content, the information regarding the user, and the information regarding the creator, and (CHOU, para. 0084: “an example of a convolutional neural network block 34A, which may be implemented as a machine learning block 34, is shown in FIG. 7. Generally, the convolutional neural network block 34A includes one or more convolution layers 66, which each implements convolution weights 68, connected via layer interconnections 71.” CHOU, para. 0140: “In any case, the machine learning block 34 may be trained to determine and implement machine learning parameters 64 (process block 150). In some embodiments, the machine learning block 34 may be trained offline, for example, by implementing the process 86 of FIG. 9 based on training image data and actual video quality corresponding with the training image data.
Additionally or alternatively, the machine learning block 34 may be trained online, for example, by feeding back actual video quality resulting from display of decoded image data 126.”; Examiner’s Note: the CNN is trained (updating its weights) using training image data (corresponding to the recited “reproduction content”), the actual video quality (which as intended, corresponds to the “information regarding the creator” because that’s the video quality the creator intends the viewer to see), and the feeding back of actual video quality from the display (corresponding to the recited “information regarding the user”)) the circuitry is further configured to perform an estimation of the signal processing corresponding to the combination of the reproduction content, the information regarding the user, and the information regarding the creator, on a basis of a neural network in which a coupling weight coefficient included in the learning model is set. (CHOU, para. 0152: “To facilitate improving transcoding, it is desirable to transcode a bit stream using a lower bitrate, possible lower resolution and lower frame rate. By analyzing using machine learning techniques, characteristics of a bit stream may be determined and, for example, indicated in a metadata file and/or derived at the decoder stage of the transcoder. Given analysis of the bit stream and/or other relevant conditions (e.g., network information, display/device capabilities, current power usage, resource information, applications running on the device, and/or expected user streaming behaving), a convolutional neural network may determine “coding control” parameters for encoding the video. These could include QP for the frame or regions, GOP structure, intra frames, lambda parameters for mode decision and motion estimation (e.g., if used by transcoder), downscaling ratio as well as filters for the downscaling. In some embodiments, a convolutional neural network may be implemented to facilitate performing motion compensated temporal filtering, which at least in some instances may further assist with compression through pre-filtering.” Examiner’s Note: in combination with ATKINS and WALLACE, the data processing stage 18 of ATKINS now utilizes the CNN of CHOU when transcoding the source video files to the video files for the target computer). Before the effective filing date of the present application, one of ordinary skill in the art would have been motivated to combine the video processing system of ATKINS with teachings of WALLACE and CHOU as explained above. As disclosed by CHOU, one of ordinary skill would have been motivated to do so because CHOU teaches that “varying the encoding parameters may affect encoding efficiency (e.g., size of encode image data and/or encoding throughput), decoding efficiency (e.g., decoding throughput), and/or video quality expected to result when corresponding decoded image data is used to display an image.” (para. 0006). CHOU further discloses that video quality may “be improved by leveraging machine learning techniques” such as CNNs. (CHOU, paras. 0015, 0084). One of ordinary skill would further be motivated to do so, because when streaming video content over the Internet (which is contemplated by ATKINS at paras. 
Claims 14-16 are rejected under 35 U.S.C. 103 as being unpatentable over ATKINS in view of WALLACE and further in view of US 20170209804 A1, hereinafter referenced as KIM.

Regarding Claim 14

ATKINS and WALLACE disclose the device of claim 1. However, ATKINS and WALLACE fail to explicitly teach:

wherein the circuitry is further configured to control an external device to output a scene-producing effect corresponding to the reproduction content.

However, in a related field of endeavor (a chair that “operates during a movie screening to release scent into the audience and impart olfactory effects”, para. 0001), KIM teaches:

wherein the circuitry is further configured to control an external device to output a scene-producing effect corresponding to the reproduction content (KIM, para. 0072: “The communication control unit 220 receives a scent on/off signal through a corresponding communication line (e.g., RS-485, CAN, etc.). The communication control unit 220 operates a switch signal based on the received scent on/off signal. Here, the scent on/off signal may be programmed by being synchronized with images screened in a movie theater. In this case, the communication control unit 220 may receive the scent on/off signal synchronized with the 4D images. That is, the communication control unit 220 may be applied to not only the scent on/off signal applied to the 4D field, but also various fields without being limited to a particular field as described above.”; KIM, para. 0107: “To this end, the scent-generating apparatus 200 basically includes the scent-generating unit 210, the communication control unit 220, and the multi-tube 230. The scent-generating apparatus 200 may further include a plurality of nozzles 810 provided on the experiential chairs 100. The plurality of nozzles 810 sprays the scent, thereby stimulating olfactory senses of the audience. The plurality of nozzles 810 may be provided on the special-effect experiential chairs 100, respectively.”; Examiner’s Note: KIM teaches an “experiential chair” that sprays a scent toward the viewer in the audience, where such scent is synchronized with images in the movie (corresponding to the recited “output a scene-producing effect corresponding to the reproduction content”)).

Before the effective filing date of the present application, one of ordinary skill would have been motivated to combine the video processing system of ATKINS with the teachings of WALLACE and KIM as explained above. As disclosed by KIM, one of ordinary skill would have been motivated to do so in order to “release scent into the audience and impart olfactory effects, thereby maximizing the degree of immersion of the audience.” (para. 0001). One of ordinary skill would also have been motivated to do so in order to improve the 4-D experience of a viewer. (para. 0014).
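As a concrete illustration of the synchronization KIM describes, the sketch below models a creator-authored cue sheet driving scent on/off signals from the current playback position. The ScentCue structure, the cue values, and the function names are invented for illustration; KIM’s apparatus is hardware driven over RS-485/CAN, not Python.

```python
# Hypothetical sketch only: scent on/off signals synchronized with the
# reproduction content's timeline, in the spirit of KIM para. 0072.
from dataclasses import dataclass

@dataclass
class ScentCue:
    start_s: float   # when to switch the scent on, in seconds
    end_s: float     # when to switch it off
    scent_id: str    # which cartridge/nozzle to drive

# Cue sheet authored alongside the content (creator-side metadata)
CUES = [ScentCue(12.0, 15.5, "pine"), ScentCue(40.0, 43.0, "smoke")]

def signals_at(playhead_s: float) -> dict[str, bool]:
    """Return the on/off state for each scent at the current playhead."""
    return {c.scent_id: c.start_s <= playhead_s < c.end_s for c in CUES}

print(signals_at(13.2))   # {'pine': True, 'smoke': False}
```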
Regarding Claim 15

ATKINS, WALLACE, and KIM disclose the device of claim 14. However, ATKINS and WALLACE fail to explicitly teach:

wherein the external device includes an effect producing device that outputs a scene-producing effect, and the circuitry is further configured to control the effect producing device, on a basis of the information regarding the user and the information regarding the creator detected by the circuitry.

However, in a related field of endeavor (a chair that “operates during a movie screening to release scent into the audience and impart olfactory effects”, para. 0001), KIM teaches:

wherein the external device includes an effect producing device that outputs a scene-producing effect, and the circuitry is further configured to control the effect producing device, on a basis of the information regarding the user and the information regarding the creator detected by the circuitry (KIM, para. 0072: “The communication control unit 220 receives a scent on/off signal through a corresponding communication line (e.g., RS-485, CAN, etc.). The communication control unit 220 operates a switch signal based on the received scent on/off signal. Here, the scent on/off signal may be programmed by being synchronized with images screened in a movie theater. In this case, the communication control unit 220 may receive the scent on/off signal synchronized with the 4D images. That is, the communication control unit 220 may be applied to not only the scent on/off signal applied to the 4D field, but also various fields without being limited to a particular field as described above.”; KIM, para. 0107: “To this end, the scent-generating apparatus 200 basically includes the scent-generating unit 210, the communication control unit 220, and the multi-tube 230. The scent-generating apparatus 200 may further include a plurality of nozzles 810 provided on the experiential chairs 100. The plurality of nozzles 810 sprays the scent, thereby stimulating olfactory senses of the audience. The plurality of nozzles 810 may be provided on the special-effect experiential chairs 100, respectively.”; Examiner’s Note: KIM teaches an “experiential chair” that sprays a scent toward the viewer in the audience, where such scent is synchronized with images in the movie; in combination with ATKINS and WALLACE, the video processing system of ATKINS now transmits the video to the experiential chair of KIM, where the scent is synchronized with the image (corresponding to the recited “information regarding the creator” because a creator did the synchronization) and in view of knowing the general vicinity of the user’s nose in relation to the nozzle (corresponding to the recited “information regarding the user”), where the ambient light sensor of ATKINS is used to confirm that the room is dark and the movie is playing).

Before the effective filing date of the present application, one of ordinary skill would have been motivated to combine the video processing system of ATKINS with the teachings of WALLACE and KIM as explained above. As disclosed by KIM, one of ordinary skill would have been motivated to do so in order to “release scent into the audience and impart olfactory effects, thereby maximizing the degree of immersion of the audience.” (para. 0001). One of ordinary skill would also have been motivated to do so in order to improve the 4-D experience of a viewer. (para. 0014).
Regarding Claim 16

ATKINS, WALLACE, and KIM disclose the device of claim 15. However, ATKINS and WALLACE fail to explicitly teach:

wherein the effect producing device includes an effect producing device that uses at least one of wind, temperature, water, light, scent, smoke, and physical movement.

However, in a related field of endeavor (a chair that “operates during a movie screening to release scent into the audience and impart olfactory effects”, para. 0001), KIM teaches:

wherein the effect producing device includes an effect producing device that uses at least one of wind, temperature, water, light, scent, smoke, and physical movement (KIM, para. 0107: “To this end, the scent-generating apparatus 200 basically includes the scent-generating unit 210, the communication control unit 220, and the multi-tube 230. The scent-generating apparatus 200 may further include a plurality of nozzles 810 provided on the experiential chairs 100. The plurality of nozzles 810 sprays the scent, thereby stimulating olfactory senses of the audience. The plurality of nozzles 810 may be provided on the special-effect experiential chairs 100, respectively.”; Examiner’s Note: KIM teaches an “experiential chair” that sprays a scent toward the viewer in the audience, where such scent is synchronized with images in the movie; in combination with ATKINS and WALLACE, the video processing system of ATKINS now transmits the video to the experiential chair of KIM, where the scent is synchronized with the image as in KIM).

Before the effective filing date of the present application, one of ordinary skill would have been motivated to combine the video processing system of ATKINS with the teachings of WALLACE and KIM as explained above. As disclosed by KIM, one of ordinary skill would have been motivated to do so in order to “release scent into the audience and impart olfactory effects, thereby maximizing the degree of immersion of the audience.” (para. 0001). One of ordinary skill would also have been motivated to do so in order to improve the 4-D experience of a viewer. (para. 0014).

Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over ATKINS in view of WALLACE and KIM and further in view of US 20180232606 A1, hereinafter referenced as PARK.

Regarding Claim 17

ATKINS, WALLACE, and KIM disclose the device of claim 14. However, ATKINS, WALLACE, and KIM fail to explicitly teach:

wherein the circuitry is further configured to acquire a learning model that is generated on a basis of the information regarding the user and the information regarding the creator, the learning model being of a control process for the effect producing device, and the circuitry is further configured to estimate a process of controlling the effect producing device, on a basis of the learning model.

However, in a related field of endeavor (4-D movies, see para. 0003), PARK teaches:

wherein the circuitry is further configured to acquire a learning model that is generated on a basis of the information regarding the user and the information regarding the creator, the learning model being of a control process for the effect producing device, and the circuitry is further configured to estimate a process of controlling the effect producing device, on a basis of the learning model (PARK, para. 0048: “The neural network learning model-based sensory information providing apparatus according to the present disclosure may learn (train) a sensory effect extraction model from training data (i.e., the plurality of videos and the sensory effect meta information of the plurality of videos) by using a neural network learning model used in deep learning, and automatically extract the sensory information for an input video stream through the learned (trained) model.” PARK, para. 0057: “The training data of the deep learning model may include sensory effect types, sensory effect durations, sensory effect attributes, and sensory effect supplementary information (e.g., intensity, position, direction, color, etc.), and the like. Also, the task of constructing the learning data set may be performed based on user's determination through analysis of 4D movies and videos.”; Examiner’s Note: PARK teaches using a neural network (corresponding to the recited “learning model”) that is trained using videos and sensory effect meta information (corresponding to the recited “information regarding the creator”) and is trained with respect to the position and direction of the viewer (corresponding to the recited “information regarding the user”); in combination with ATKINS, WALLACE, and KIM, the video processing system of ATKINS, as modified by KIM with respect to an experiential chair and scent sprayer, now uses the trained neural network of PARK to learn when to spray a scent, and the direction and intensity in which to spray the scent at the user).

Before the effective filing date of the present application, one of ordinary skill would have been motivated to combine the video processing system of ATKINS with the teachings of WALLACE, KIM, and PARK as explained above. As disclosed by PARK, one of ordinary skill would have been motivated to do so to solve the “problem that the production of sensory media is not actively performed due to an increase in production cost and an increase in production time.” Further, as disclosed by PARK, there is an “MPEG-V international standard sensory effect metadata” (see para. 0003), so one of ordinary skill would further be motivated to apply the sensory data contemplated by MPEG-V to improve the 4D experience.
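To ground the PARK mapping, here is a minimal sketch, assuming a PyTorch-style setup, of training a “sensory effect extraction model” on per-segment video features paired with creator-authored effect labels (an effect type plus intensity/direction attributes). The feature dimension, heads, and synthetic batch are hypothetical stand-ins, not PARK’s disclosure.

```python
# Hypothetical sketch only: a model mapping segment features to sensory
# effect labels, loosely mirroring PARK paras. 0048/0057.
import torch
import torch.nn as nn

EFFECT_TYPES = ["wind", "scent", "vibration", "none"]

class SensoryEffectModel(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU())
        self.type_head = nn.Linear(128, len(EFFECT_TYPES))  # effect type
        self.attr_head = nn.Linear(128, 2)                  # intensity, direction

    def forward(self, x):
        h = self.trunk(x)
        return self.type_head(h), self.attr_head(h)

model = SensoryEffectModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One synthetic training step on (segment features, creator-authored labels):
feats = torch.rand(8, 64)                                   # video segment features
type_labels = torch.randint(0, len(EFFECT_TYPES), (8,))     # effect type labels
attr_labels = torch.rand(8, 2)                              # intensity/direction labels
type_logits, attrs = model(feats)
loss = nn.functional.cross_entropy(type_logits, type_labels) \
     + nn.functional.mse_loss(attrs, attr_labels)
opt.zero_grad(); loss.backward(); opt.step()
```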
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

US 20200227052 A1 (Breebaart): “Ideally, to ensure that artistic intent is conveyed correctly to the listener, presentations are rendered or generated for specific playback modalities. For headphones playback, this implies the application of HRIRs or BRIRs to create a binaural presentation, while for loudspeakers, amplitude panning techniques are commonly used. Such rendering process can thus be applied to channel-based input content (5.1, 7.1 and alike), as well as to immersive, object-based content such as Dolby Atmos. For the latter, amplitude panning (for loudspeaker presentations) or BRIRs (for headphone presentations) are typically used on every input object independently, followed by summation of the individual object contributions to the resulting binaural signal.” (para. 0007).

US 20200322743 A1 (Cengarle): “If the extracted object were detected as violating an artistic intention, using either the embodiments of FIG. 5 or 6 to preserve the artistic intention would neutralise the object extraction itself. For example, the extracted object might be left without signal by applying the embodiment of FIGS. 3a-c if the fraction to be extracted is zero. In such cases, and also in other cases, it may be desirable to perform object extraction again, in order to extract the next significant components. In order to do so, the following strategy may be used:
[0174] 1) Once an object is detected as potentially violating artistic intention, obtain its multichannel version by applying the panning parameters (set of energy levels) computed when extracting the audio object. In other words, use the first set of energy levels for rendering the audio object to a second plurality of channels in the first configuration
[0175] 2) subtract audio components of the second plurality of channels from audio components of the first plurality of channels, and obtaining a time frame of a third multichannel audio signal (i.e., a difference signal).
[0176] 3) Then, run again object extraction on the difference signal. In other words, extract at least one further audio object from the time frame of the third multichannel audio signal, wherein the further audio object being extracted from a specific subset of the plurality of channels of the third multichannel audio signal.
[0177] 4) Apply any embodiment described above to detect violation of artistic intention of each of the extracted further audio objects, in which case any of the embodiments for artistic preservations described above is applied, and re-iterate from step 1) until a certain stop criterion is met.” (paras. 0173-0177).
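The Cengarle passage reads as a four-step loop: render the potentially violating object back through its panning gains, subtract it to form a difference signal, re-run extraction on that difference, and repeat until a stop criterion is met. The sketch below restates those steps in code; the extractor and the intent-violation test are crude hypothetical stand-ins, not Cengarle’s embodiments.

```python
# Hypothetical sketch only: the iterative strategy of Cengarle
# paras. 0173-0177 in outline form, on a (channels, samples) array.
import numpy as np

def extract_object(channels):
    """Stand-in extractor: returns (mono object estimate, panning gains)."""
    gains = np.sqrt(np.mean(channels**2, axis=1))
    gains = gains / (np.linalg.norm(gains) + 1e-12)
    return gains @ channels, gains

def violates_intent(obj) -> bool:
    """Stand-in detector for an artistic-intent violation."""
    return np.max(np.abs(obj)) > 0.9          # hypothetical criterion

def iterative_extraction(channels, max_iters=4):
    objects, residual = [], channels.copy()
    for _ in range(max_iters):                # stop criterion: iteration cap
        obj, gains = extract_object(residual)
        rendered = np.outer(gains, obj)       # 1) multichannel version
        residual = residual - rendered        # 2) difference signal
        if violates_intent(obj):
            continue                          # 3) re-extract next components
        objects.append((obj, gains))          # 4) keep non-violating objects
    return objects, residual
```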
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL C LEE, whose telephone number is (571) 272-4933. The examiner can normally be reached M-F 12:00 pm - 8:00 pm ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Omar Fernandez Rivas, can be reached at 571-272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL C. LEE/
Examiner, Art Unit 2128

Prosecution Timeline

Apr 15, 2022
Application Filed
Apr 06, 2025
Non-Final Rejection — §103
Aug 14, 2025
Response Filed
Aug 29, 2025
Final Rejection — §103
Dec 03, 2025
Response after Non-Final Action
Jan 07, 2026
Request for Continued Examination
Jan 14, 2026
Response after Non-Final Action
Jan 15, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by the same examiner covering similar technology

Patent 12603081: METHOD AND SERVER FOR A TEXT-TO-SPEECH PROCESSING
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12602605: QUANTUM COMPUTER ARCHITECTURE BASED ON MULTI-QUBIT GATES
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12591915: METHODS AND SYSTEMS FOR DETERMINING RECOMMENDATIONS BASED ON REAL-TIME OPTIMIZATION OF MACHINE LEARNING MODELS
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12585743: INTERFACE ACCESS PROCESSING METHOD, COMPUTER DEVICE AND STORAGE MEDIUM
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12568935: AI-BASED LIVESTOCK MANAGEMENT SYSTEM AND LIVESTOCK MANAGEMENT METHOD THEREOF
Granted Mar 10, 2026 (2y 5m to grant)

Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
59%
Grant Probability
86%
With Interview (+27.1%)
3y 2m
Median Time to Grant
High
PTA Risk
Based on 136 resolved cases by this examiner. Grant probability derived from career allow rate.
