Prosecution Insights
Last updated: April 19, 2026
Application No. 18/566,382

Data Transmission Method and Apparatus, and Data Transmission Device and Storage Medium

Non-Final OA: §102, §103, §112
Filed: Dec 01, 2023
Examiner: TODD, GREGORY G
Art Unit: 2443
Tech Center: 2400 — Computer Networks
Assignee: BEIJING CHJ INFORMATION TECHNOLOGY CO., LTD.
OA Round: 1 (Non-Final)

Grant Probability: 39% (At Risk)
OA Rounds: 1-2
To Grant: 5y 3m
With Interview: 34%

Examiner Intelligence

Career Allow Rate: 39% (171 granted / 443 resolved); -19.4% vs TC avg
Interview Lift: -4.1% (minimal; with vs. without interview, among resolved cases with interview)
Avg Prosecution: 5y 3m (typical timeline); 45 currently pending
Total Applications: 488 (career history, across all art units)

Statute-Specific Performance

§101: 8.8% (-31.2% vs TC avg)
§102: 23.7% (-16.3% vs TC avg)
§103: 36.9% (-3.1% vs TC avg)
§112: 25.0% (-15.0% vs TC avg)
Tech Center average estimates shown for comparison • Based on career data from 443 resolved cases

Office Action

§102 §103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This is a first office action in response to the application filed, with the above serial number, on 01 December 2023, in which claims 12 and 15-16 have been cancelled, claims 17-23 have been added, and claims 1-11, 13-14, and 17-23 are presented for examination.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 5-7 and 20-22 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. The claims recite "the second target physical address is a physical address in the data receiving device for storing the data to be transmitted" or similar. It is not clear whether, once the data has been transmitted to and received at the second target physical address, the data is still "to be transmitted", or whether the phrase is retained merely for antecedent basis purposes; the claims are therefore indefinite.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-7, 11, 13-14, and 17-22 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Yang (hereinafter "Yang", 2019/0005606).

As per Claim 1, Yang discloses a data transmission method, comprising:

receiving a data transmission request from a data initiating device (at least paragraphs 31-33: send a read request or a write request with at least one struct page data object to the device driver 114 of the peripheral device 122, and the device driver 114 can determine the bus addresses based on the struct page data object);

querying a first target physical address and a first target virtual address according to the data transmission request, wherein the first target physical address is a physical address in the data initiating device for storing data to be transmitted and the first target virtual address is a virtual address for buffering the data to be transmitted (at least paragraphs 33-35: user application 120 can invoke a first function call of the GPU 104 to request allocation of a region of GPU virtual memory 134. The first function call returns to the user application 120 a pointer 136 to the allocated region of the GPU virtual memory 134. The user application 120 can invoke a second function call of the GPU 104 to request that the allocated region of GPU virtual memory 134 be mapped to a region of GPU physical memory 140. The second function call returns to the user application 120 a GPU memory mapping handle which represents the mapping between the GPU virtual memory 134 and the GPU physical memory 140. The user application 120 can then invoke and pass the GPU memory mapping handle to a third function call of the GPU 104 to request mapping of the GPU physical memory 140 to the CPU virtual memory space associated with the user application 120. The GPU virtual memory address pointer 136 is associated with a CPU virtual address 138 returned to the user application 120 by the third function call. Based on the GPU memory mapping handle, the data structure 116 can be established and used to map the CPU virtual address 138 to the GPU physical memory region 140);

reading the data to be transmitted from the first target physical address to the first target virtual address by using a direct memory access technology (at least paragraphs 36-37: user-space application 120 initiates direct input/output operations using the CPU virtual address 138 associated with the GPU physical address 140. The operating system kernel sends the data structure 116 associated with the CPU virtual address 138 to the device driver 114 associated with the peripheral device 122. The peripheral device 122 determines the bus address of the GPU memory 108 based on the data structure 116. The peripheral device 122 initiates DMA transfer of data with the GPU memory 108 based on the bus address of the GPU memory 108); and

transmitting the data to be transmitted from the first target virtual address to a data receiving device through a preset data transmission network (at least paragraphs 36-37: enabling DMA data transfer between the peripheral device 122 and the GPU memory 108; direct input/output operations using the CPU virtual address 138 associated with the GPU physical address 140. The operating system kernel sends the data structure 116 associated with the CPU virtual address 138 to the device driver 114 associated with the peripheral device 122).

Claims 13-14 do not, in substance, add or define any additional limitations over claim 1 and are therefore rejected for similar reasons, supra.
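Stripped of Yang's GPU-specific mapping details, the four-step method the examiner reads onto claim 1 (receive a request, look up the first target physical and virtual addresses, DMA-read into the virtual buffer, send over the network) can be sketched in a few lines. Every name below (`physical_memory`, `address_map`, `handle_transmission_request`, etc.) and the in-memory stand-ins for DMA and the network are hypothetical illustrations, not the application's or Yang's actual API.

```python
# Minimal sketch of the claim-1 flow, under the assumptions stated above.

physical_memory = {0x1000: b"sensor frame 0"}   # stand-in for device RAM
address_map = {"prog-A": (0x1000, "vbuf-A")}    # program id -> (phys addr, virt addr)
virtual_buffers: dict[str, bytes] = {}          # stand-in for mapped virtual memory
network_rx: list[bytes] = []                    # stand-in for the data receiving device

def handle_transmission_request(device_program_id: str) -> None:
    # Query the first target physical and virtual addresses for this request.
    phys_addr, virt_addr = address_map[device_program_id]
    # "DMA" read: copy from the physical address into the virtual buffer.
    virtual_buffers[virt_addr] = physical_memory[phys_addr]
    # Transmit from the first target virtual address over the preset network.
    network_rx.append(virtual_buffers[virt_addr])

handle_transmission_request("prog-A")
print(network_rx)  # [b'sensor frame 0']
```

The lookup keyed by a device program identifier also mirrors the claim-2 limitation (identifier to physical address, then physical to virtual via a mapping relationship), collapsed here into a single table for brevity.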
As per Claim 2: The method according to claim 1, wherein the data transmission request carries a device program identifier (at least paragraph 40: user-space process 120 can include, e.g., a video rendering program, an animation rendering program, an image processing program, a machine learning program, a mathematical simulation program, an application program for controlling a vehicle); and querying the first target physical address and the first target virtual address according to the data transmission request comprises: querying the first target physical address corresponding to the device program identifier (at least paragraphs 41, 44: GPU driver 118 maps the allocated GPU virtual memory 134 to a region of the GPU physical memory 140, and locks the mapping between the GPU virtual memory 134 and the GPU physical memory 140. The term "locking" refers to maintaining the mapping between the allocated GPU virtual memory 134 and the GPU physical memory 140 until the user application 120 issues an instruction to unlock or release the mapping; the CPU virtual address 138 of the user-space application 120 is linked to the GPU memory 108 that has been allocated to the user-space application 120); and querying the first target virtual address corresponding to the first target physical address according to a first address mapping relationship (at least paragraphs 41-42: GPU driver 118 maps the allocated GPU virtual memory 134 to a region of the GPU physical memory 140, and locks the mapping between the GPU virtual memory 134 and the GPU physical memory 140. The term "locking" refers to maintaining the mapping between the allocated GPU virtual memory 134 and the GPU physical memory 140 until the user application 120 issues an instruction to unlock or release the mapping).

Claim 17 does not, in substance, add or define any additional limitations over claim 2 and is therefore rejected for similar reasons, supra.

As per Claim 3:
The method according to claim 1, wherein reading the data to be transmitted from the first target physical address to the first target virtual address by using the direct memory access technology comprises: in a case that a data type of the data to be transmitted is a target data type, reading the data to be transmitted from the first target physical address to the first target virtual address by using the direct memory access technology; wherein the target data type is a data type to which data targeted by the direct memory access technology belong (at least paragraphs 40, 110: DMA techniques described above can be used in self-driving vehicles that process vast amounts of data, such as image data and other data captured by various sensors of the vehicle; a user-space process or application 120 allocates GPU virtual memory 134 and receives a pointer 136 to the GPU virtual memory 134. The user-space process 120 can include, e.g., a video rendering program, an animation rendering program, an image processing program, a machine learning program, a mathematical simulation program, an application program for controlling a vehicle).

Claim 18 does not, in substance, add or define any additional limitations over claim 3 and is therefore rejected for similar reasons, supra.

As per Claim 4: The method according to claim 1, wherein transmitting the data to be transmitted from the first target virtual address to the data receiving device through the preset data transmission network comprises: transmitting the data to be transmitted from the first target virtual address to a first transmission device through the preset data transmission network, wherein the first transmission device is connected to the data receiving device and the first transmission device is configured to transmit the data to be transmitted to the data receiving device by using the direct memory access technology (at least Fig. 1: from CPU virtual address 138 initiating direct I/O to device driver 114 to initiate DMA to device 122 and direct memory access DMA to data receiving device GPU 140).

Claim 19 does not, in substance, add or define any additional limitations over claim 4 and is therefore rejected for similar reasons, supra.

As per Claim 5: The method according to claim 4, wherein transmitting the data to be transmitted from the first target virtual address to the first transmission device through the preset data transmission network comprises: querying a second target virtual address corresponding to the first target virtual address according to a second address mapping relationship, wherein the second target virtual address is a virtual address in the first transmission device for buffering the data to be transmitted (at least Fig. 1; par. 35, 29-30, 46: transmission from the first target virtual address (e.g., user application 120 CPU virtual address) to device 122; from the CPU's perspective, each device connected to the system bus 124 is allocated a range of physical addresses, and there exists a mapping between a physical address and a bus address; CPU virtual address 138 that is mapped to the allocated GPU memory 10); and transmitting the data to be transmitted from the first target virtual address to the second target virtual address through the preset data transmission network (at least Fig. 1; par. 35, 29: transmission from the first target virtual address (e.g., user application 120 CPU virtual address) to device 122); wherein the first transmission device is configured to transmit the data to be transmitted from the second target virtual address to a second target physical address corresponding to the second target virtual address by using the direct memory access technology and the second target physical address is a physical address in the data receiving device for storing the data to be transmitted (at least Fig. 1; par. 35, 29: transmission from the first target virtual address (e.g., user application 120 CPU virtual address) to device 122 to data receiving device 104 virtual address 134 with direct data transfer).

Claim 20 does not, in substance, add or define any additional limitations over claim 5 and is therefore rejected for similar reasons, supra.

As per Claim 6: The method according to claim 1, wherein transmitting the data to be transmitted from the first target virtual address to the data receiving device through the preset data transmission network comprises: transmitting the data to be transmitted from the first target virtual address to a second transmission device through the preset data transmission network, wherein the second transmission device is configured to transmit the data to be transmitted to a first transmission device through the preset data transmission network, the first transmission device is connected to the data receiving device, and the first transmission device is configured to transmit the data to be transmitted to the data receiving device by using the direct memory access technology (at least Fig. 1: from CPU virtual address 138 initiating direct I/O to device driver 114 (second transmission device) to initiate DMA to device 122 (first transmission device) and direct memory access DMA to data receiving device GPU 140).

Claim 21 does not, in substance, add or define any additional limitations over claim 6 and is therefore rejected for similar reasons, supra.

As per Claim 7:
The method according to claim 6, wherein transmitting the data to be transmitted from the first target virtual address to the second transmission device through the preset data transmission network comprises: querying a third target virtual address corresponding to the first target virtual address according to a third address mapping relationship, wherein the third target virtual address is a virtual address in the second transmission device for buffering the data to be transmitted (at least Fig. 1; par. 35, 29: transmission from the first target virtual address (e.g., user application 120 CPU virtual address) to device 122; CPU virtual address 138 that is mapped to the allocated GPU memory 10 [claim 7 mapping similar to claim 5 with another additional device between, e.g., device driver]); and transmitting the data to be transmitted from the first target virtual address to the third target virtual address through the preset data transmission network (at least Fig. 1; par. 35, 29: transmission from the first target virtual address (e.g., user application 120 CPU virtual address) to device 122); wherein the second transmission device is configured to transmit the data to be transmitted from the third target virtual address to a second target virtual address corresponding to the third target virtual address through the preset data transmission network, the second target virtual address is a virtual address in the first transmission device for buffering the data to be transmitted, and the first transmission device is configured to transmit the data to be transmitted from the second target virtual address to a second target physical address corresponding to the second target virtual address by using the direct memory access technology, and the second target physical address is a physical address in the data receiving device for storing the data to be transmitted (at least Fig. 1; par. 35, 29: transmission from the first target virtual address (e.g., user application 120 CPU virtual address) to device 122 to data receiving device 104 virtual address 134 with direct data transfer).

Claim 22 does not, in substance, add or define any additional limitations over claim 7 and is therefore rejected for similar reasons, supra.

As per Claim 11: The method according to claim 1, before receiving the data transmission request from the data initiating device, further comprising: during a startup process, loading network configuration data corresponding to the preset data transmission network from an electrically erasable programmable read only memory (at least paragraphs 114, 116: EEPROM).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 8-10 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Yang in view of Kobayashi et al. (hereinafter "Kobayashi", 2019/0297025).

As per Claim 8, Yang fails to disclose wherein the preset data transmission network comprises a time-sensitive network. However, the use and advantages of such a system were well known to one skilled in the art before the effective filing date of the claimed invention, as evidenced by the teachings of Kobayashi. Kobayashi discloses, in an analogous art, TSN being a well-known network type used in cooperation with DMA processing (at least paragraphs 23-26).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the use of Kobayashi's TSN with Yang, as Kobayashi teaches that such a system is aimed at a high degree of real-time capability and reliability and is applicable to in-vehicle networks; using TSN in DMA applications is thus an obvious combination, as both are directed toward transmitting data in time-critical environments.

As per Claim 9: The method according to claim 8, wherein transmitting the data to be transmitted from the first target virtual address to the data receiving device through the preset data transmission network comprises: adding the data to be transmitted to a transmission queue of the time-sensitive network, wherein the transmission queue is configured to determine a first transmission order of the data to be transmitted according to a target transmission priority and a timestamp carried by the data to be transmitted and transmit the data to be transmitted from the first target virtual address to the data receiving device according to the first transmission order, and the target transmission priority is a transmission priority to which a data type of the data to be transmitted belongs (at least Kobayashi paragraphs 23-26, 49-50: IEEE 802.1Qbv standard with TSN; transferring data from the main memory to the transmission buffers used by the NIC is performed in an asynchronous manner with respect to the timings of transmission control; selects the transmission buffer having the degree of priority for which transmission is currently possible).

As per Claim 10:
The method according to claim 8, wherein transmitting the data to be transmitted from the first target virtual address to the data receiving device through the preset data transmission network comprises: adding the data to be transmitted to a transmission queue of the time-sensitive network, wherein the data transmission sequence is configured to determine a second transmission order of the data to be transmitted according to a received timestamp carried by the data to be transmitted and transmit the data to be transmitted from the first target virtual address to the data receiving device according to the second transmission order (at least paragraphs 23-26: gate control list; transmission control according to the IEEE 802.1Qbv standard is performed with respect to Ethernet frames that have already been input to the transmission buffers used in the NIC).

Claim 23 does not, in substance, add or define any additional limitations over claims 9-10 and is therefore rejected for similar reasons, supra.

Conclusion

The prior art made of record and not relied upon, considered pertinent to applicant's disclosure, is indicated on PTO form 892. Any inquiry concerning this communication or earlier communications from the examiner should be directed to GREGORY G TODD, whose telephone number is (303) 297-4763. The examiner can normally be reached 8:30-5 MST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Nicholas Taylor, can be reached at (571) 272-3889. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/GREGORY TODD/
Primary Examiner, Art Unit 2443
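The transmission-queue behavior the rejection reads onto claims 9-10 (frames dequeued by transmission priority, with ties broken by the carried timestamp, loosely analogous to per-priority gating under IEEE 802.1Qbv) can be sketched with a priority heap. The `TsnQueue` class, its frame fields, and the priority values are illustrative assumptions, not the claimed implementation or Kobayashi's.

```python
import heapq

# Hedged sketch: order frames by (priority, timestamp) as claims 9-10 recite.
class TsnQueue:
    def __init__(self) -> None:
        self._heap: list[tuple[int, int, bytes]] = []

    def enqueue(self, priority: int, timestamp: int, payload: bytes) -> None:
        # Lower number = higher priority; ties broken by earlier timestamp.
        heapq.heappush(self._heap, (priority, timestamp, payload))

    def dequeue(self) -> bytes:
        return heapq.heappop(self._heap)[2]

q = TsnQueue()
q.enqueue(priority=2, timestamp=100, payload=b"best-effort")
q.enqueue(priority=0, timestamp=105, payload=b"control")
q.enqueue(priority=0, timestamp=101, payload=b"sensor")
order = [q.dequeue() for _ in range(3)]
print(order)  # [b'sensor', b'control', b'best-effort']
```

Claim 10's variant, ordering by received timestamp alone, would correspond to enqueueing with a single-key tuple; real 802.1Qbv hardware instead gates each priority queue against a time-synchronized schedule.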

Prosecution Timeline

Dec 01, 2023
Application Filed
Mar 07, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12580996
SYSTEMS, METHODS, AND MEDIA FOR PREDICTING DATA FOR PRECACHING AND/OR RECACHING AT A COMPUTER CACHE OF A COMPUTER ENVIRONMENT
2y 5m to grant • Granted Mar 17, 2026
Patent 12574347
VEHICLE NETWORK ADDRESS ASSIGNMENT
2y 5m to grant • Granted Mar 10, 2026
Patent 12556472
METHOD AND DEVICE FOR PARALLELLY SENDING ROUTE ADVERTISEMENT MESSAGES
2y 5m to grant • Granted Feb 17, 2026
Patent 12513048
APPARATUS AND METHOD FOR GENERATING NETWORK SLICE IN WIRELESS COMMUNICATION SYSTEM
2y 5m to grant • Granted Dec 30, 2025
Patent 12500961
MULTIZONE MIGRATION SERVICES
2y 5m to grant • Granted Dec 16, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 39%
With Interview: 34% (-4.1%)
Median Time to Grant: 5y 3m
PTA Risk: Low
Based on 443 resolved cases by this examiner. Grant probability derived from career allow rate.
