Prosecution Insights
Last updated: April 19, 2026
Application No. 18/714,934

DATA PROCESSING AND DECODING METHODS, MOBILE AND CONTROL TERMINALS, ELECTRONIC SYSTEM, AND MEDIUM

Final Rejection — §101, §103
Filed: May 30, 2024
Examiner: ALKIRSH, AHMED
Art Unit: 3668
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: BOE TECHNOLOGY GROUP CO., LTD.
OA Round: 2 (Final)
Grant Probability: 54% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 54% (23 granted / 43 resolved; +1.5% vs TC avg)
Interview Lift: +53.7% (strong); grant rate for resolved cases with an interview vs. without
Avg Prosecution: 3y 0m (typical timeline); 63 applications currently pending
Total Applications: 106 (career history, across all art units)
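The headline figures above are simple ratios over the examiner's career counts. Here is a minimal sketch of the arithmetic in Python, assuming the dashboard rounds to whole percents and that "interview lift" means the grant rate with an interview minus the grant rate without one; the per-case interview counts are not shown on this page, so the "without" rate below is back-derived and hypothetical:

```python
# Hedged reconstruction of the dashboard's headline arithmetic.

granted, resolved = 23, 43           # "23 granted / 43 resolved"
allow_rate = granted / resolved      # career allow rate
print(f"Career allow rate: {allow_rate:.1%}")  # ~53.5%, displayed as 54%

# Assumption: interview lift = grant rate with interview - grant rate without.
rate_with_interview = 0.99           # "99% With Interview"
interview_lift = 0.537               # "+53.7% Interview Lift"
rate_without_interview = rate_with_interview - interview_lift
print(f"Implied rate without interview: {rate_without_interview:.1%}")  # ~45.3%
```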

Statute-Specific Performance

§101: 20.2% (-19.8% vs TC avg)
§103: 54.5% (+14.5% vs TC avg)
§102: 22.5% (-17.5% vs TC avg)
§112: 2.8% (-37.2% vs TC avg)
Deltas are relative to a Tech Center average estimate • Based on career data from 43 resolved cases
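The "vs TC avg" deltas let you back out the baseline each statute is measured against. A small sketch, under the assumption that each delta equals the examiner's rate minus the Tech Center average (so the average is rate minus delta):

```python
# Back out the implied Tech Center baseline from each statute's rate and delta.
# Assumption: delta = examiner rate - TC average, so TC average = rate - delta.

stats = {
    "§101": (20.2, -19.8),
    "§103": (54.5, +14.5),
    "§102": (22.5, -17.5),
    "§112": (2.8, -37.2),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta
    print(f"{statute}: examiner {rate}% vs implied TC avg {tc_avg:.1f}%")
```

With the numbers shown, every implied baseline works out to 40.0%, which suggests the panel compares each statute against a single Tech Center-wide reference value rather than per-statute averages; the four examiner rates also sum to exactly 100%, so they may represent shares of outcomes by statute rather than independent rates.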

Office Action

Grounds: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1-3, 5, 7-8, 10-16, 18-21, 23 and 28-29 of U.S. Application No. 18/714,934 filed on 05/30/2024 were examined. Examiner filed a non-final office action on 10/01/2025. Applicant filed remarks and amendments on 01/02/2026. Claims 1, 3, 5, 7-8, 16 and 18 were amended. Claims 4, 6, 9, 15, 17, 22 and 24-27 are cancelled. Claims 1-3, 5, 7-8, 10-14, 16, 18-21, 23 and 28-29 are presently pending examination.

Response to Arguments

Regarding the claim rejections under 35 USC 101: Applicant's arguments filed 01/02/2026 have been fully considered but they are not persuasive. Regarding claims 1 and 16, applicant argues integration into a practical application via elements like “acquiring combined data transmitted by a mobile terminal through a same image transmission channel,” which reduce channels, save costs, ensure synchrony, and solve multi-UAV problems. However, this argument is not persuasive. Claims 1-3, 5, 7-8, 10-14, 16, 18-21, 23, 28-29 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more. The claims recite mental processes (e.g., acquiring, parsing, decoding data; generating/combining/transmitting data), performable in the mind or with pen and paper, but for generic components (control/mobile terminals). This is an abstract idea. The claims do not integrate the abstract idea into a practical application; additional elements (e.g., transmission channels) are generic, applying the idea with computers as tools, without technological improvement. Purported benefits (reduced channels, synchrony) improve the idea, not technology. The claims do not amount to significantly more; the elements are well-understood, routine, and conventional data handling, as evidenced by specification admissions and prior art (e.g., He [0058]: “A sub-picture property SEI message refers to a SEI message including sub-picture property information that may indicate (and/or define properties or indicated defined properties of) one or more sub-pictures or tile groups across multiple layers or representations associated with the same source content region. The sub-picture property information may include one or more additional indicators to indicate one or more recommended ARC switching points. The sub-picture property SEI message or like-type SEI message may include the indicators to indicate one or more recommended ARC switching points. The sub-picture property information may include one or more actual ARC switching points, e.g., to achieve better reconstructed picture quality or apply certain constraints. The sub-picture property SEI message or like-type SEI message may include the one or more actual ARC switching points.”; [0076] describing syntax for parsing sub-picture info: “Table 1 provides an example of a sub-picture property SEI message syntax structure. Table 1 lists the number of sphere regions (viewports) and the sub-pictures covering the same sphere region. Table 1 also provides the coordinates of each repositioned sub-picture relative to the conformance cropping window specified by an active Sequence Parameter Set (SPS). num_source_content_regions_minus1 plus 1 may specify the number of source content region that are specified by the SEI message. source_content_region_position may specify the position of i-th source content region.”).
Regarding the claim rejections under 35 USC 102: Applicant's arguments filed 01/02/2026 have been fully considered but they are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-3, 5, 7-8, 10-14, 16, 18-21, 23 and 28-29 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. The claimed invention is directed to the concept of obtaining and processing image data. This judicial exception is not integrated into a practical application. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception and do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The Examiner will further explain in view of the Revised Patent Subject Matter Eligibility Guidance: The claim recites a series of steps and therefore is directed to a process. Therefore, claim 1 is within at least one of the four statutory categories.

101 Analysis – Step 2A, Prong I

Regarding Prong I of the Step 2A analysis, the claims are to be analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes. Independent claims 1, 5 and 16 include limitations that recite an abstract idea (emphasized below); claims 1 and 16 will be used as representative for the remainder of the 101 rejection. Claims 1 and 16 recite: A data processing method, applied to a control terminal and comprising: acquiring first combined data transmitted from a first mobile terminal through a first image transmission channel, wherein the first combined data is data obtained by combining first image data captured by the first mobile terminal and first mobile terminal information corresponding to the first mobile terminal; parsing the first combined data to obtain the first image data and the first mobile terminal information, wherein the first image data is used for displaying; in response to the first mobile terminal information, generating first control terminal control information for controlling the first mobile terminal; and transmitting, through a second image transmission channel, the first control terminal control information to the first mobile terminal to control the first mobile terminal. The examiner submits that the foregoing bolded limitation(s) constitute a “mental process” because under its broadest reasonable interpretation, the claim covers performance of the limitation in the human mind. For example, “acquiring, parsing, generating, displaying…” in the context of this claim encompasses a person looking at data collected and forming a simple judgement. Accordingly, the claim recites at least one abstract idea.
101 Analysis – Step 2A, Prong II

Regarding Prong II of the Step 2A analysis, the claims are to be analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.” In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the “additional limitations” while the bolded portions continue to represent the “abstract idea”): An electronic device, comprising: a processor and a memory; wherein the memory stores computer-executable instructions; when the processor executes the computer-executable instructions stored in the memory, the processor is caused to perform the data processing method according to claim 5. For the following reason(s), the examiner submits that the above identified additional limitations do not integrate the above-noted abstract idea into a practical application. Regarding the additional limitations of “memory, processor,” the examiner submits that these limitations are an attempt to generally link additional elements to a technological environment. In particular, the acquiring, parsing, generating, and displaying by a processor is recited at a high level of generality and merely automates the determining steps, therefore acting as a generic computer to perform the abstract idea. The processor is claimed generically and is operating in its ordinary capacity and does not use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the exception. The additional limitation is no more than mere instructions to apply the exception using a computer processor. Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application. Further, looking at the additional limitation(s) as an ordered combination or as a whole, the limitation(s) add nothing that is not already present when looking at the elements taken individually. For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field, apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, implement/use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim, effect a transformation or reduction of a particular article to a different state or thing, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is not more than a drafting effort designed to monopolize the exception (MPEP § 2106.05).
Accordingly, the additional limitation(s) do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

101 Analysis – Step 2B

Regarding Step 2B of the Revised Guidance, representative independent claim 1 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of “memory, processor” amounts to nothing more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Hence, the claim is not patent eligible. Dependent claims 2-3, 7-8, 10-14, 16, 18-21, 23 and 28-29 do not recite any further limitations that cause the claim(s) to be patent eligible. Rather, the limitations of the dependent claims are directed toward additional aspects of the judicial exception and/or well-understood, routine and conventional additional elements that do not integrate the judicial exception into a practical application. Therefore, dependent claims 2-3, 7-8, 10-14, 16, 18-21, 23 and 28-29 are not patent eligible under the same rationale as provided for in the rejection of Claims 1 and 16. Therefore, claims 1-3, 5, 7-8, 10-14, 16, 18-21, 23 and 28-29 are ineligible under 35 USC §101.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1-3, 5, 7-8, 10-14, 16, 18-21, 23 and 28-29 are rejected under 35 U.S.C. 103 as being unpatentable over He et al. (US20240305813A1) in view of GONG MING et al. (WO2016154947A1), hereinafter referred to as He and GONG respectively.
Regarding Claim 1, He discloses A decoding method, applied to a control terminal and comprising: acquiring combined data transmitted by a mobile terminal through a same image transmission channel (“A sub-picture property SEI message refers to a SEI message including sub-picture property information that may indicate (and/or define properties or indicated defined properties of) one or more sub-pictures or tile groups across multiple layers or representations associated with the same source content region.” [0092]); parsing the combined data that is received (“A 160 MHz channel may be formed by combining 8 contiguous 20 MHz channels, or by combining two non-contiguous 80 MHz channels, which may be referred to as an 80+80 configuration. For the 80+80 configuration, the data, after channel encoding, may be passed through a segment parser that may divide the data into two streams.” [0058]); in response to obtaining, by the parsing, an extended data bit string corresponding to the mobile terminal information, parsing extended data encoding information corresponding to the mobile terminal information from the combined data, and decoding the extended data encoding information to obtain the mobile terminal information (“The middle box 210 may generate, forward, identify, or parse a high-level syntax of input video bitstream(s), extract a sub-bitstream from one input video bitstream, and/or output the extracted sub-bitstream to the client or decoder 212. The middle box 210 may extract multiple sub-bitstreams from multiple input video bitstreams and combine them together to form a new output video bitstream delivering to the client or decoder 212.” [0076]). He does not explicitly teach wherein the combined data is data obtained by combining image data captured by the mobile terminal and mobile terminal information corresponding to the mobile terminal on the mobile terminal. However, GONG does teach wherein the combined data is data obtained by combining image data captured by the mobile terminal and mobile terminal information corresponding to the mobile terminal on the mobile terminal (“The payload may be an image capturing device, and the flight regulations may dictate when and when the image capturing device may be capturing images, transmitting the images, and/or storing the images.” [0152]; combining, on the UAV, the captured image and information), and mobile terminal information (“The UAV may have one or more sensors. The UAV may comprise one or more vision sensors such as an image sensor… The UAV may further comprise other sensors that may be used to determine a location of the UAV, such as global positioning system (GPS) sensors, inertial sensors which may be used as part of or separately from an inertial measurement unit (IMU) (e.g., accelerometers, gyroscopes, magnetometers), lidar, ultrasonic sensors, acoustic sensors, WiFi sensors.” [0123]; state data). Both He and GONG teach methods for obtaining and decoding mobile terminal information. However, GONG explicitly teaches wherein the combined data is data obtained by combining image data captured by the mobile terminal and mobile terminal information corresponding to the mobile terminal on the mobile terminal.

It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the monitoring method of He to also include wherein the combined data is data obtained by combining image data captured by the mobile terminal and mobile terminal information corresponding to the mobile terminal on the mobile terminal, as taught by GONG, with a reasonable expectation of success. Doing so improves safety when operating UAVs (with regard to this reasoning, see at least [GONG, 0123 and 0152]).

Regarding Claim 2, He discloses The decoding method according to claim 1, further comprising: parsing image encoding information corresponding to the image data from the combined data (“The middle box 210 may extract multiple sub-bitstreams from multiple input video bitstreams and combine them together to form a new output video bitstream delivering to the client or decoder 212.” [0076]); and decoding the image encoding information to obtain the image data (“The middle box 210 may generate, forward, identify, or parse a high-level syntax of input video bitstream(s), extract a sub-bitstream from one input video bitstream, and/or output the extracted sub-bitstream to the client or decoder 212.” [0076]).

Regarding Claim 3, He discloses The decoding method according to claim 1. He does not explicitly disclose the mobile terminal is an unmanned aerial vehicle, and the mobile terminal information comprises information generated by the unmanned aerial vehicle. However, GONG does teach the mobile terminal is an unmanned aerial vehicle, and the mobile terminal information comprises information generated by the unmanned aerial vehicle (“The communications may be direct communications between the UAV and the remote device. Examples of direct communications may include WiFi, WiMax, radiofrequency, infrared, visual, or other types of direct communications.” [0182]; “The UAV may have one or more sensors. The UAV may comprise one or more vision sensors such as an image sensor… The UAV may further comprise other sensors that may be used to determine a location of the UAV, such as global positioning system (GPS) sensors, inertial sensors which may be used as part of or separately from an inertial measurement unit (IMU) (e.g., accelerometers, gyroscopes, magnetometers), lidar, ultrasonic sensors, acoustic sensors, WiFi sensors.” [0123]). Both He and GONG teach methods for obtaining and decoding mobile terminal information. However, GONG explicitly teaches the mobile terminal is an unmanned aerial vehicle, and the mobile terminal information comprises information generated by the unmanned aerial vehicle. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the monitoring method of He to also include the mobile terminal is an unmanned aerial vehicle, and the mobile terminal information comprises information generated by the unmanned aerial vehicle, as taught by GONG, with a reasonable expectation of success. Doing so improves safety when operating UAVs (with regard to this reasoning, see at least [GONG, 0123 and 0182]).

Regarding Claim 5, He discloses A data processing method, applied to a first mobile terminal and comprising: capturing an image data by the first mobile terminal (“The client may rely on the proposed SEI message. For example, the client may identify the ARC sub-picture and the associated reference sub-pictures based on the SEI message. The client may align the coordinate(s) between ARC sub-picture and reference sub-picture for proper motion compensation process.” [0107]; “A sub-picture property SEI message refers to a SEI message including sub-picture property information that may indicate (and/or define properties or indicated defined properties of) one or more sub-pictures or tile groups across multiple layers or representations associated with the same source content region” [0092]); and transmitting the combined data to a control terminal through a first image transmission channel (“Table 1 provides an example of a sub-picture property SEI message syntax structure. Table 1 lists the number of sphere regions (viewports) and the sub-pictures covering the same sphere region. Table 1 also provides the coordinates of each repositioned sub-picture relative to the conformance cropping window specified by an active Sequence Parameter Set (SPS). num_source_content_regions_minus1 plus 1 may specify the number of source content region that are specified by the SEI message. source_content_region_position may specify the position of i-th source content region.” [0096-0097]).
He does not explicitly teach combining the image data and the mobile terminal information to generate combined data, wherein the generating mobile terminal information corresponding to the first mobile terminal comprises: acquiring at least one first state parameter corresponding to the first mobile terminal, wherein the at least one first state parameter is used for indicating a state of the first mobile terminal and a condition of an environment where the first mobile terminal is located; and generating, based on the at least one first state parameter, state information, wherein the mobile terminal information comprises the state information. However, GONG teaches combining the image data and the mobile terminal information to generate combined data (“The sensors onboard or off board the UAV may collect information such as location of the UAV, location of other objects, orientation of the UAV, or environmental information.” [0124]), wherein the generating mobile terminal information corresponding to the first mobile terminal comprises: acquiring at least one first state parameter corresponding to the first mobile terminal, wherein the at least one first state parameter is used for indicating a state of the first mobile terminal and a condition of an environment where the first mobile terminal is located; and generating, based on the at least one first state parameter, state information, wherein the mobile terminal information comprises the state information (“Various examples of sensors may include, but are not limited to, location sensors (e.g., global positioning system (GPS) sensors, mobile device transmitters enabling location triangulation), vision sensors (e.g., imaging devices capable of detecting visible, infrared, or ultraviolet light, such as cameras), proximity or range sensors (e.g., ultrasonic sensors, lidar, time-of-flight or depth cameras), inertial sensors (e.g., accelerometers, gyroscopes, inertial measurement units (IMUs)), altitude sensors, attitude sensors (e.g., compasses) pressure sensors (e.g., barometers), audio sensors (e.g., microphones) or field sensors (e.g., magnetometers, electromagnetic sensors).” [0123]; “The UAV may have one or more sensors… other sensors that may be used to determine a location of the UAV, such as global positioning system (GPS) sensors, inertial sensors…” [0123]).

Both He and GONG teach methods for obtaining and decoding mobile terminal information. However, GONG explicitly teaches combining the image data and the mobile terminal information to generate combined data, wherein the generating mobile terminal information corresponding to the first mobile terminal comprises: acquiring at least one first state parameter corresponding to the first mobile terminal, wherein the at least one first state parameter is used for indicating a state of the first mobile terminal and a condition of an environment where the first mobile terminal is located; and generating, based on the at least one first state parameter, state information, wherein the mobile terminal information comprises the state information. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the monitoring method of He to also include combining the image data and the mobile terminal information to generate combined data, wherein the generating mobile terminal information corresponding to the first mobile terminal comprises: acquiring at least one first state parameter corresponding to the first mobile terminal, wherein the at least one first state parameter is used for indicating a state of the first mobile terminal and a condition of an environment where the first mobile terminal is located; and generating, based on the at least one first state parameter, state information, wherein the mobile terminal information comprises the state information, as taught by GONG, with a reasonable expectation of success. Doing so improves safety when operating UAVs (with regard to this reasoning, see at least [GONG, 0123 and 0124]).

Regarding Claim 7, He discloses The data processing method according to claim 5, wherein the generating mobile terminal information corresponding to the first mobile terminal comprises: obtaining at least one first state parameter corresponding to the first mobile terminal, wherein the at least one first state parameter is used for indicating a state of the first mobile terminal and a condition of an environment where the first mobile terminal is located (“A single SEI message or parameter set describing the sub-picture properties (e.g., the correspondence between subpictures across multiple layers, and the mapping of subpictures in each layer to regions on the (e.g., 360-degree) video sphere) may simplify the mapping between the sub-picture and the corresponding original content region to facilitate sub-picture based applications.” [0094]); and generating, based on the at least one first state parameter, state information, wherein the first mobile terminal comprises an unmanned aerial vehicle, and the mobile terminal information comprises the state information (“Alternatively, sub-picture resolution may be derived from the tile group layout and the entire picture resolution. In order to map a (e.g., each) sub-picture to the corresponding sphere space, the sub-picture property SEI message may include and/or provide information such as layer ID, tile group ID, the coordinate of sub-picture and its mapping onto a sphere coordinate space.” [0095]).
Regarding Claim 8, He discloses The data processing method according to claim 7, wherein the first mobile terminal comprises at least one sensor (“The peripherals 138 may include one or more sensors, the sensors may be one or more of a gyroscope, an accelerometer, a hall effect sensor, a magnetometer, an orientation sensor, a proximity sensor, a temperature sensor, a time sensor; a geolocation sensor; an altimeter, a light sensor, a touch sensor, a magnetometer, a barometer, a gesture sensor, a biometric sensor, and/or a humidity sensor.” [0043]); and the obtaining at least one first state parameter corresponding to the first mobile terminal comprises: collecting sensing data respectively corresponding to the at least one sensor to obtain at least one sensor sensing parameter, wherein the at least one first state parameter comprises the at least one sensor sensing parameter (“The peripherals 138 may include one or more sensors, the sensors may be one or more of a gyroscope, an accelerometer, a hall effect sensor, a magnetometer, an orientation sensor, a proximity sensor, a temperature sensor, a time sensor; a geolocation sensor; an altimeter, a light sensor, a touch sensor, a magnetometer, a barometer, a gesture sensor, a biometric sensor, and/or a humidity sensor.” [0043]); wherein the at least one first state parameter comprises at least one of following parameters: an ambient light intensity, a vision parameter, a height, a power, a distance, and an ambient temperature (“The peripherals 138 may include one or more sensors, the sensors may be one or more of a gyroscope, an accelerometer, a hall effect sensor, a magnetometer, an orientation sensor, a proximity sensor, a temperature sensor, a time sensor; a geolocation sensor; an altimeter, a light sensor, a touch sensor, a magnetometer, a barometer, a gesture sensor, a biometric sensor, and/or a humidity sensor.” [0043]).

Regarding Claim 10, He discloses The data processing method according to claim 7, wherein the generating mobile terminal information corresponding to the first mobile terminal further comprises: obtaining an information header and mobile terminal identity information corresponding to the first mobile terminal, wherein the mobile terminal information further comprises the information header and the mobile terminal identity information, and the information header and the mobile terminal identity information are used for identifying the first mobile terminal (“The middle box 210 may generate, forward, identify, or parse a high-level syntax of input video bitstream(s), extract a sub-bitstream from one input video bitstream, and/or output the extracted sub-bitstream to the client or decoder 212. The middle box 210 may extract multiple sub-bitstreams from multiple input video bitstreams and combine them together to form a new output video bitstream delivering to the client or decoder 212.” [0076]).

Regarding Claim 11, He discloses The data processing method according to claim 10, wherein the information header comprises at least one of following information: copyright information, encryption information, a product category code corresponding to the first mobile terminal, and a company code corresponding to the first mobile terminal (“The sub-picture resolution may be signaled explicitly (e.g., in a PPS or a tile group header). Alternatively, sub-picture resolution may be derived from the tile group layout and the entire picture resolution. In order to map a (e.g., each) sub-picture to the corresponding sphere space, the sub-picture property SEI message may include and/or provide information such as layer ID, tile group ID, the coordinate of sub-picture and its mapping onto a sphere coordinate space. The SEI message may list any or all sub-pictures available for viewport adaptive streaming and the region-wise packing of the sub-pictures. The decoder may identify the corresponding reference sub-picture which may or may not collocate with the current sub-picture based on such SEI message. Depending on the resolution of the reference sub-picture, the decoder may scale the sub-picture for ARC and align the coordinate between the current sub-picture and the reference sub-picture.” [0095]); the mobile terminal identity information comprises application identity identification information and module identification information (“The decoder may identify the corresponding reference sub-picture which may or may not collocate with the current sub-picture based on such SEI message. Depending on the resolution of the reference sub-picture, the decoder may scale the sub-picture for ARC and align the coordinate between the current sub-picture and the reference sub-picture.” [0095]); the application identity identification information comprises a product brand code of a manufacturer that manufactures the first mobile terminal, a field and type code corresponding to an application field to which the first mobile terminal belongs, a frequency band code corresponding to an operating frequency band corresponding to the first mobile terminal, and a channel code corresponding to an operating channel corresponding to the first mobile terminal (“For example, the communications systems 100 may employ one or more channel access methods, such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA), single-carrier FDMA (SC-FDMA), zero-tail unique-word DFT-Spread OFDM (ZT UW DTS-s OFDM), unique word OFDM (UW-OFDM), resource block-filtered OFDM, filter bank multicarrier (FBMC), and the like.” [0021]); and the module identification information represents a production number corresponding to the first mobile terminal (“To reconstruct the ARC sub-picture, the decoder may parse the SEI message and may identify those sub-pictures associated with the same region as tile group No. 2 and tile group No. 7. For instance, tile group No. 1 may be associated with the same content region as tile group No. 7 and may be available in a previously decoded picture. Tile group No. 8 may be associated with the same content region as tile group No. 2 and may be available in a previously decoded picture.” [0109]).

Regarding Claim 12, He discloses The data processing method according to claim 7, wherein the generating mobile terminal information corresponding to the first mobile terminal further comprises: determining that any first state parameter of the at least one first state parameter does not meet a state condition corresponding to the any first state parameter, generating alerting information, wherein the mobile terminal information further comprises the alerting information, and the alerting information is used for informing the control terminal of the first mobile terminal being in an abnormal state (“each extracted sub-picture may be assigned to a different position in a new picture. As used herein, a sub-picture property SEI message refers to a SEI message including sub-picture property information that may indicate (and/or define properties or indicated defined properties of) one or more sub-pictures or tile groups across multiple layers or representations associated with the same source content region. In an embodiment, the sub-picture property information may include one or more additional indicators to indicate one or more recommended ARC switching points. Alternatively, the sub-picture property SEI message or like-type SEI message may include the indicators to indicate one or more recommended ARC switching points.” [0092]).

Regarding Claim 13, He discloses The data processing method according to claim 12, wherein the generating mobile terminal information corresponding to the first mobile terminal further comprises: in response to generating the alerting information, generating mobile terminal control information (“FIG. 7 illustrates an example of sub-picture based ARC mechanism 700. With reference to FIG. 7, each sub-picture of cube-map projection format may be coded into two resolutions. A sub-picture matching the viewport may be extracted from the high-resolution representation. The rest of the sub-pictures may be extracted from the low-resolution representation. In various embodiments, a sub-picture property SEI message may indicate those sub-pictures associated with the same source content region.” [0108]); performing an operation corresponding to the mobile terminal control information under control of the mobile terminal control information (“The processor 118 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU 102 to operate in a wireless environment.” [0036]); and in response to performing the operation corresponding to the mobile terminal control information under the control of the mobile terminal control information, generating notification information, wherein the mobile terminal information further comprises the notification information, and the notification information is used for informing the control terminal of the first mobile terminal having performed the operation corresponding to the mobile terminal control information (“each extracted sub-picture may be assigned to a different position in a new picture. As used herein, a sub-picture property SEI message refers to a SEI message including sub-picture property information that may indicate (and/or define properties or indicated defined properties of) one or more sub-pictures or tile groups across multiple layers or representations associated with the same source content region. In an embodiment, the sub-picture property information may include one or more additional indicators to indicate one or more recommended ARC switching points. Alternatively, the sub-picture property SEI message or like-type SEI message may include the indicators to indicate one or more recommended ARC switching points.” [0092]).
Regarding Claim 14, He discloses The data processing method according to claim 7, wherein the generating mobile terminal information corresponding to the first mobile terminal further comprises: receiving at least one second state parameter transmitted from a second mobile terminal and corresponding to the second mobile terminal (“a method may comprise identifying a set of sub-pictures associated with a picture being available or recommended to perform ARC, and generating/sending an SEI message indicating one or more parameters of the set of sub-pictures.” [0172]); determining, based on the at least one first state parameter and the at least one second state parameter, a relative state between the first mobile terminal and the second mobile terminal (“In various embodiments, one or more parameters discussed above may include any of: a POC value for a high-resolution representation of the picture, a POC value of a corresponding lower-layer reference picture with a representation ID, one or more scaling filter coefficients, and a prediction method.” [0173]); and generating, based on the relative state, control suggestion information, wherein the mobile terminal information further comprises the control suggestion information, and the control suggestion information indicates suggesting the control terminal to perform an operation corresponding to the control suggestion information (“a method may comprise identifying a set of sub-pictures associated with a picture being available or recommended to perform ARC, and generating/sending an SEI message indicating one or more parameters of the set of sub-pictures.” [0172]).

Regarding Claim 16, He discloses A data processing method, applied to a control terminal and comprising: acquiring first combined data transmitted from a first mobile terminal through a first image transmission channel (“In various embodiments, a method may comprise receiving an SEI message including sub-picture property information for use with adaptive switching of a viewport, and performing ARC based on the sub-picture property information.” [0150]); parsing the first combined data to obtain the first image data and the first mobile terminal information, wherein the first image data is used for displaying (“ARC may reduce streaming start latency as the application usually buffers up to a certain number of decoded pictures and/or range of decoding time before displaying and, for example, in view of smaller sized pictures.” [0083]); in response to the first mobile terminal information (“Each of the gNBs 180 a, 180 b, 180 c may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the UL and/or DL, support of network slicing, dual connectivity, interworking between NR and E-UTRA, routing of user plane data towards User Plane Function (UPF) 184 a, 184 b, routing of control plane information towards Access and Mobility Management Function (AMF) 182 a, 182 b and the like. As shown in FIG. 1D, the gNBs 180 a, 180 b, 180 c may communicate with one another over an Xn interface.” [0066]); and transmitting the first control terminal control information to the first mobile terminal to control the first mobile terminal (“Table 1 provides an example of a sub-picture property SEI message syntax structure. Table 1 lists the number of sphere regions (viewports) and the sub-pictures covering the same sphere region. Table 1 also provides the coordinates of each repositioned sub-picture relative to the conformance cropping window specified by an active Sequence Parameter Set (SPS). num_source_content_regions_minus1 plus 1 may specify the number of source content region that are specified by the SEI message. source_content_region_position may specify the position of i-th source content region.” [0096-0097]). He does not explicitly teach generating first control terminal control information for controlling the first mobile terminal. However, GONG teaches generating first control terminal control information for controlling the first mobile terminal (“A communication channel may be provided between a user and a corresponding UAV that may be used to control operation of the UAV” [0157]… “The communication regulation zone may impose a flight response measure that affects operation of a communication unit of the UAV.” [0209]). Both He and GONG teach methods for obtaining and decoding mobile terminal information. However, GONG explicitly teaches generating first control terminal control information for controlling the first mobile terminal. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the monitoring method of He to also include generating first control terminal control information for controlling the first mobile terminal, as taught by GONG, with a reasonable expectation of success. Doing so improves safety when operating UAVs (with regard to this reasoning, see at least [GONG, 0157 and 0209]).

Regarding Claim 18, He discloses The data processing method according to claim 16, wherein the control terminal comprises a display screen, and the data processing method further comprises: displaying the first image data on the display screen, the transmitting the first control terminal control information to the first mobile terminal to control the first mobile terminal comprises: determining, based on the first control terminal control information, a second image transmission channel (“In various implementations, for example, multi-party video conferencing may benefit from using ARC to process picture and/or video (“picture/video”) frames, where one or more, or all participants (i.e., pictures/videos thereof) are displayed individually on a shared screen, and an active speaker (i.e., picture/video thereof) is displayed in a larger video size than the rest of the participants.” [0082]); and transmitting, through the second image transmission channel, the first control terminal control information to the first mobile terminal to control the first mobile terminal (“For example, WTRU 102 a may receive coordinated transmissions from gNB 180 a and gNB 180 b (and/or gNB 180 c).” [0065]).
Regarding Claim 19, He discloses The data processing method according to claim 16, further comprising: receiving second combined data transmitted from a second mobile terminal, wherein the second combined data is data obtained by combining second image data captured by the second mobile terminal and second mobile terminal information corresponding to the second mobile terminal (“In various embodiments, a method may comprise receiving an SEI message including sub-picture property information for use with adaptive switching of a viewport, and performing ARC based on the sub-picture property information.” [0150]); parsing the second combined data to obtain the second image data and the second mobile terminal information, wherein the second image data is used for displaying; in response to the second mobile terminal information, generating second control terminal control information for controlling the second mobile terminal (“ARC may reduce streaming start latency as the application usually buffers up to a certain number of decoded pictures and/or range of decoding time before displaying and, for example, in view of smaller sized pictures.” [0083]); and transmitting the second control terminal control information to the second mobile terminal to control the second mobile terminal (“For example, WTRU 102 a may receive coordinated transmissions from gNB 180 a and gNB 180 b (and/or gNB 180 c).” [0065]).

Regarding Claim 20, He discloses The data processing method according to claim 19, wherein the control terminal comprises a display screen, and the data processing method further comprises: displaying the first image data and the second image data on the display screen (“For example, WTRU 102 a may receive coordinated transmissions from gNB 180 a and gNB 180 b (and/or gNB 180 c).” [0065]).

Regarding Claim 21, He discloses The data processing method according to claim 20, wherein the display screen comprises a display region, and the display region comprises a first display sub-region (“Under current motion-constrained tile set (MCTS)-based viewport adaptive 360-degree video streaming, sub-pictures that represent a viewport are usually delivered using a high resolution, and sub-pictures that represent other areas (e.g., areas not in the user's view) are usually delivered using a lower resolution. When the viewport changes, the corresponding resolutions of the sub-pictures are changed accordingly, and user experience is affected by switching latency of the high-quality viewport.” [0083]); and the displaying the first image data and the second image data on the display screen comprises: displaying the first image data in the display region (“Under current motion-constrained tile set (MCTS)-based viewport adaptive 360-degree video streaming, sub-pictures that represent a viewport are usually delivered using a high resolution, and sub-pictures that represent other areas (e.g., areas not in the user's view) are usually delivered using a lower resolution. When the viewport changes, the corresponding resolutions of the sub-pictures are changed accordingly, and user experience is affected by switching latency of the high-quality viewport.” [0083]); and displaying the second image data in the first display sub-region, wherein in the first display sub-region, the second image data overlies a part, in the first display sub-region, of the first image data; or wherein the display screen comprises a display region, the display region comprises a first display sub-region and a second display sub-region, and the first display sub-region and the second display sub-region do not overlap (“FIG. 4 illustrates an example of viewport adaptive streaming mechanism 400. A viewport sub-picture (e.g., front view) is extracted from a large-resolution 360-degree video (Representation No. 1) and sub-pictures that represent other areas are extracted from a low-resolution 360-degree video (Representation No. 2). The extracted sub-pictures may be combined into a single representation, (e.g., as illustrated by a series of frames at the bottom right of FIG. 4). The resulting composed or merged viewport adaptive video is delivered to the user (or client) so that the user can experience a high-quality viewport with reduced delivery bandwidth. In case the user changes the viewport from front view to right view, the high-resolution right view sub-picture is extracted upon an IRAP picture, and a new video frame of the composed or merged video is formed that incorporates the high-resolution right view sub-picture and the low-resolution front sub-picture.” [0084]); and the displaying the first image data and the second image data on the display screen comprises: displaying the first image data in the first display sub-region (“The resulting composed or merged viewport adaptive video is delivered to the user (or client) so that the user can experience a high-quality viewport with reduced delivery bandwidth. In case the user changes the viewport from front view to right view, the high-resolution right view sub-picture is extracted upon an IRAP picture, and a new video frame of the composed or merged video is formed that incorporates the high-resolution right view sub-picture and the low-resolution front sub-picture.” [0084]); and displaying the second image data in the second display sub-region (“The resulting composed or merged viewport adaptive video is delivered to the user (or client) so that the user can experience a high-quality viewport with reduced delivery bandwidth. In case the user changes the viewport from front view to right view, the high-resolution right view sub-picture is extracted upon an IRAP picture, and a new video frame of the composed or merged video is formed that incorporates the high-resolution right view sub-picture and the low-resolution front sub-picture.” [0084]).

Regarding Claim 23, He discloses The data processing method according to claim 16, wherein the first control terminal control information comprises an information header, control terminal identity information, and control interaction information (“The middle box 210 may generate, forward, identify, or parse a high-level syntax of input video bitstream(s), extract a sub-bitstream from one input video bitstream, and/or output the extracted sub-bitstream to the client or decoder 212. The middle box 210 may extract multiple sub-bitstreams from multiple input video bitstreams and combine them together to form a new output video bitstream delivering to the client or decoder 212.” [0076]); and the information header comprises at least one of following information: copyright information, encryption information, a product category code corresponding to the control terminal, and a company code corresponding to the control terminal, and the encryption information is determined based on the first mobile terminal information (“The sub-picture resolution may be signaled explicitly (e.g., in a PPS or a tile group header). Alternatively, sub-picture resolution may be derived from the tile group layout and the entire picture resolution. In order to map a (e.g., each) sub-picture to the corresponding sphere space, the sub-picture property SEI message may include and/or provide information such as layer ID, tile group ID, the coordinate of sub-picture and its mapping onto a sphere coordinate space. The SEI message may list any or all sub-pictures available for viewport adaptive streaming and the region-wise packing of the sub-pictures. The decoder may identify the corresponding reference sub-picture which may or may not collocate with the current sub-picture based on such SEI message. Depending on the resolution of the reference sub-picture, the decoder may scale the sub-picture for ARC and align the coordinate between the current sub-picture and the reference sub-picture.” [0095]); the control terminal identity information comprises application identity identification information and module identification information (“The decoder may identify the corresponding reference sub-picture which may or may not collocate with the current sub-picture based on such SEI message. Depending on the resolution of the reference sub-picture, the decoder may scale the sub-picture for ARC and align the coordinate between the current sub-picture and the reference sub-picture.” [0095]); the application identity identification information comprises a product brand code corresponding to a manufacturer that manufactures the control terminal, a field and type code corresponding to an application field to which the control terminal belongs, and a frequency band code corresponding to an operating frequency band that corresponds to the control terminal (“For example, the communications systems 100 may employ one or more channel access methods, such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA), single-carrier FDMA (SC-FDMA), zero-tail unique-word DFT-Spread OFDM (ZT UW DTS-s OFDM), unique word OFDM (UW-OFDM), resource block-filtered OFDM, filter bank multicarrier (FBMC), and the like.” [0021]); the module identification information represents a production number corresponding to the control terminal (“To reconstruct the ARC sub-picture, the decoder may parse the SEI message and may identify those sub-pictures associated with the same region as tile group No. 2 and tile group No. 7. For instance, tile group No. 1 may be associated with the same content region as tile group No. 7 and may be available in a previously decoded picture. Tile group No. 8 may be associated with the same content region as tile group No. 2 and may be available in a previously decoded picture.” [0109]); and the control interaction information comprises at least one of following information: control mode information, mobile terminal identification information for indicating the first mobile terminal, transmission channel information, state control information, state mode information, shooting information, lighting information, and angle information, wherein the state control information is used for controlling a motion and an attitude of the first mobile terminal (“FIG. 3 illustrates an example of ARC mechanism 300. With reference to FIG. 3, a high-resolution picture frame No. 3 may be encoded with inter-prediction from a reference picture frame No. 2 having a same resolution and a low-resolution picture frame No. 5 may be encoded with inter-prediction from a reference picture frame No. 4 having a same resolution. During ARC, the high-resolution picture frame No. 3 may be reconstructed by the reference picture frame No. 2 that is upscaled from a low-resolution picture frame No. 2 for motion compensation, and the low-resolution picture frame No. 5 may be reconstructed by the reference picture frame No. 4 that is downscaled from a high-resolution picture frame No. 4 for motion compensation. As a result, resolution switching(s) may happen or be performed on one or more non-IRAP frames.” [0081]); wherein the control terminal comprises a display screen, the display screen comprises a touch remote control region (“the WTRU 102 may include a processor 118, a transceiver 120, a transmit/receive element 122, a speaker/microphone 124, a keypad 126, a display/touchpad 128,” [0035]); the touch remote control region comprises a plurality of virtual keys (“The MME 162 may be connected to each of the eNode-Bs 160 a, 160 b, 160 c in the RAN 104 via an S1 interface and may serve as a control node. For example, the MME 162 may be responsible for authenticating users of the WTRUs 102 a, 102 b, 102 c, bearer activation/deactivation, selecting a particular serving gateway during an initial attach of the WTRUs 102 a, 102 b, 102 c, and the like.” [0049]); and the control interaction information is determined based on an operation applied to the plurality of virtual keys (“The MME 162 may be connected to each of the eNode-Bs 160 a, 160 b, 160 c in the RAN 104 via an S1 interface and may serve as a control node. For example, the MME 162 may be responsible for authenticating users of the WTRUs 102 a, 102 b, 102 c, bearer activation/deactivation, selecting a particular serving gateway during an initial attach of the WTRUs 102 a, 102 b, 102 c, and the like.” [0049]).

Regarding Claim 28, He discloses A non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium stores computer-executable instructions, and the computer-executable instructions, when executed by a processor, cause implementing the data processing method according to claim 5 (“In an illustrative embodiment, any of the operations, processes, etc. described herein may be implemented as computer-readable instructions stored on a computer-readable medium. The computer-readable instructions may be executed by a processor of a mobile unit, a network element, and/or any other computing device.” [0187]).
Regarding Claim 29, He discloses An electronic device, comprising: a processor and a memory; wherein the memory stores computer-executable instructions; when the processor executes the computer-executable instructions stored in the memory, the processor is caused to perform the data processing method according to claim 5 (“FIG. 1B is a system diagram illustrating an example WTRU 102. As shown in FIG. 1B, the WTRU 102 may include a processor 118, a transceiver 120, a transmit/receive element 122, a speaker/microphone 124, a keypad 126, a display/touchpad 128, non-removable memory 130, removable memory 132, a power source 134, a global positioning system (GPS) chipset 136, and/or other peripherals 138, among others.” [0035]).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AHMED ALKIRSH whose telephone number is (703) 756-4503. The examiner can normally be reached M-F 9:00 am-5:00 pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, FADEY JABR, can be reached on (571) 272-1516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AA/ Examiner, Art Unit 3668
/Fadey S. Jabr/ Supervisory Patent Examiner, Art Unit 3668

Prosecution Timeline

May 30, 2024
Application Filed
Sep 29, 2025
Non-Final Rejection — §101, §103
Jan 02, 2026
Response Filed
Feb 09, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12578724: Detection of Anomalous Trailer Behavior (granted Mar 17, 2026; 2y 5m to grant)
Patent 12410589: METHODS AND SYSTEMS FOR IMPLEMENTING A LOCK-OUT COMMAND ON LEVER MACHINES (granted Sep 09, 2025; 2y 5m to grant)
Patent 12403908: NON-SELFISH TRAFFIC LIGHTS PASSING ADVISORY SYSTEMS (granted Sep 02, 2025; 2y 5m to grant)
Patent 12370903: METHOD FOR TORQUE CONTROL OF ELECTRIC VEHICLE ON SLIPPERY ROAD SURFACE, AND TERMINAL DEVICE (granted Jul 29, 2025; 2y 5m to grant)
Patent 12325450: SYSTEMS AND METHODS FOR GENERATING MULTILEVEL OCCUPANCY AND OCCLUSION GRIDS FOR CONTROLLING NAVIGATION OF VEHICLES (granted Jun 10, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 54%
With Interview: 99% (+53.7%)
Median Time to Grant: 3y 0m
PTA Risk: Moderate
Based on 43 resolved cases by this examiner. Grant probability derived from career allow rate.
